Oct 06 11:45:11 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 06 11:45:11 crc restorecon[4660]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 
11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 
11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc 
restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 11:45:11 crc restorecon[4660]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:11 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 
06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 
crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc 
restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc 
restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc 
restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc 
restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc 
restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 11:45:12 crc restorecon[4660]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 06 11:45:13 crc kubenswrapper[4698]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.074814 4698 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088507 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088548 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088554 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088560 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088566 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088575 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088582 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088589 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088596 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088604 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088612 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088619 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088625 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088631 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088636 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088643 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088648 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088654 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088659 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088664 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088670 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088676 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088682 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088687 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088692 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088698 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088704 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088710 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088715 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088721 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088727 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088732 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088738 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088744 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088757 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088763 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088769 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088774 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088780 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088787 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088793 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088798 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088803 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088811 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088819 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088826 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088834 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088842 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088849 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088855 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088860 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088866 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088871 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088876 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088881 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088887 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088892 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088897 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088902 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088909 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088914 4698 feature_gate.go:330] unrecognized feature gate: Example
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088920 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088925 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088930 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088935 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088944 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088952 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088959 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088965 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088970 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.088976 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089123 4698 flags.go:64] FLAG: --address="0.0.0.0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089137 4698 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089149 4698 flags.go:64] FLAG: --anonymous-auth="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089156 4698 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089164 4698 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089170 4698 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089178 4698 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089186 4698 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089193 4698 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089199 4698 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089207 4698 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089215 4698 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089222 4698 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089229 4698 flags.go:64] FLAG: --cgroup-root=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089237 4698 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089246 4698 flags.go:64] FLAG: --client-ca-file=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089254 4698 flags.go:64] FLAG: --cloud-config=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089261 4698 flags.go:64] FLAG: --cloud-provider=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089268 4698 flags.go:64] FLAG: --cluster-dns="[]"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089275 4698 flags.go:64] FLAG: --cluster-domain=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089281 4698 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089288 4698 flags.go:64] FLAG: --config-dir=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089294 4698 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089300 4698 flags.go:64] FLAG: --container-log-max-files="5"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089309 4698 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089314 4698 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089321 4698 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089327 4698 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089334 4698 flags.go:64] FLAG: --contention-profiling="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089341 4698 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089347 4698 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089354 4698 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089360 4698 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089368 4698 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089374 4698 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089380 4698 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089386 4698 flags.go:64] FLAG: --enable-load-reader="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089393 4698 flags.go:64] FLAG: --enable-server="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089400 4698 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089410 4698 flags.go:64] FLAG: --event-burst="100"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089416 4698 flags.go:64] FLAG: --event-qps="50"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089422 4698 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089429 4698 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089435 4698 flags.go:64] FLAG: --eviction-hard=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089444 4698 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089450 4698 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089456 4698 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089465 4698 flags.go:64] FLAG: --eviction-soft=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089471 4698 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089478 4698 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089484 4698 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089490 4698 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089498 4698 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089505 4698 flags.go:64] FLAG: --fail-swap-on="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089511 4698 flags.go:64] FLAG: --feature-gates=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089519 4698 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089526 4698 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089533 4698 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089539 4698 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089546 4698 flags.go:64] FLAG: --healthz-port="10248"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089552 4698 flags.go:64] FLAG: --help="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089558 4698 flags.go:64] FLAG: --hostname-override=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089565 4698 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089573 4698 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089580 4698 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089587 4698 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089593 4698 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089599 4698 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089605 4698 flags.go:64] FLAG: --image-service-endpoint=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089611 4698 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089640 4698 flags.go:64] FLAG: --kube-api-burst="100"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089647 4698 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089654 4698 flags.go:64] FLAG: --kube-api-qps="50"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089660 4698 flags.go:64] FLAG: --kube-reserved=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089667 4698 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089673 4698 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089679 4698 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089685 4698 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089691 4698 flags.go:64] FLAG: --lock-file=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089697 4698 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089703 4698 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089710 4698 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089719 4698 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089727 4698 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089734 4698 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089741 4698 flags.go:64] FLAG: --logging-format="text"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089747 4698 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089753 4698 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089759 4698 flags.go:64] FLAG: --manifest-url=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089765 4698 flags.go:64] FLAG: --manifest-url-header=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089774 4698 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089780 4698 flags.go:64] FLAG: --max-open-files="1000000"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089795 4698 flags.go:64] FLAG: --max-pods="110"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089804 4698 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089810 4698 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089816 4698 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089823 4698 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089829 4698 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089835 4698 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089842 4698 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089858 4698 flags.go:64] FLAG: --node-status-max-images="50"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089865 4698 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089871 4698 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089877 4698 flags.go:64] FLAG: --pod-cidr=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089883 4698 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089892 4698 flags.go:64] FLAG: --pod-manifest-path=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089898 4698 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089905 4698 flags.go:64] FLAG: --pods-per-core="0"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089911 4698 flags.go:64] FLAG: --port="10250"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089917 4698 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089923 4698 flags.go:64] FLAG: --provider-id=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089929 4698 flags.go:64] FLAG: --qos-reserved=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089935 4698 flags.go:64] FLAG: --read-only-port="10255"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089942 4698 flags.go:64] FLAG: --register-node="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089948 4698 flags.go:64] FLAG: --register-schedulable="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089954 4698 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089967 4698 flags.go:64] FLAG: --registry-burst="10"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089973 4698 flags.go:64] FLAG: --registry-qps="5"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089979 4698 flags.go:64] FLAG: --reserved-cpus=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089986 4698 flags.go:64] FLAG: --reserved-memory=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.089994 4698 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090000 4698 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090006 4698 flags.go:64] FLAG: --rotate-certificates="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090029 4698 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090036 4698 flags.go:64] FLAG: --runonce="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090042 4698 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090049 4698 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090056 4698 flags.go:64] FLAG: --seccomp-default="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090062 4698 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090069 4698 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090075 4698 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090082 4698 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090088 4698 flags.go:64] FLAG: --storage-driver-password="root"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090094 4698 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090100 4698 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090106 4698 flags.go:64] FLAG: --storage-driver-user="root"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090112 4698 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090118 4698 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090124 4698 flags.go:64] FLAG: --system-cgroups=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090130 4698 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090139 4698 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090145 4698 flags.go:64] FLAG: --tls-cert-file=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090151 4698 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090161 4698 flags.go:64] FLAG: --tls-min-version=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090167 4698 flags.go:64] FLAG: --tls-private-key-file=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090173 4698 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090179 4698 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090185 4698 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090191 4698 flags.go:64] FLAG: --v="2"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090199 4698 flags.go:64] FLAG: --version="false"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090207 4698 flags.go:64] FLAG: --vmodule=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090216 4698 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090222 4698 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090380 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090388 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090395 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090402 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090407 4698 feature_gate.go:330] unrecognized feature gate: Example
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090413 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090419 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090425 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090431 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090436 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090442 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090447 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090453 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090458 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090463 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090469 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090474 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090480 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090485 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090491 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090496 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090502 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090508 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090513 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090519 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090524 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090530 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090535 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090540 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090547 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090554 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090559 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090565 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090570 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090575 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090580 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090588 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090594 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090601 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090608 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090615 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090622 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090628 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090633 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090638 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090644 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090649 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090654 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090659 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090664 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090669 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090675 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090680 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090685 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090691 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090697 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090702 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090707 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090712 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090717 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090722 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090728 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090734 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090739 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090745 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090750 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090755 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090760 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090767 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090773 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.090779 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.090796 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.104229 4698 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.104302 4698 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104471 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104487 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104496 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104506 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104517 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104527 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104538 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104549 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104557 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104566 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104574 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104581 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104589 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104597 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104606 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104617 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104628 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104638 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104648 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104658 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:45:13 crc
kubenswrapper[4698]: W1006 11:45:13.104669 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104677 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104685 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104694 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104702 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104710 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104718 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104726 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104733 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104741 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104749 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104759 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104767 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104775 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104785 4698 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImages Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104794 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104802 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104809 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104817 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104825 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104874 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104882 4698 feature_gate.go:330] unrecognized feature gate: Example Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104889 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104898 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104906 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104914 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104922 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104933 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104943 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104951 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104960 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104972 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104982 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104990 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.104998 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105006 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105051 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105059 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105067 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105075 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105083 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105091 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105099 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105111 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105121 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105131 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105140 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105149 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105157 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105165 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105176 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.105192 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105446 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105463 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105471 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105480 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105490 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105499 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105506 4698 feature_gate.go:330] unrecognized feature gate: Example
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105514 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105522 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105534 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105542 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105550 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105558 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105565 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105573 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105582 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105590 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105599 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105609 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105619 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105630 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105641 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105651 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105662 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105671 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105680 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105688 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105696 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105703 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105711 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105719 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105730 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105739 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105747 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105766 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105775 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105784 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105792 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105800 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105808 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105817 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105825 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105833 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105843 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105853 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105863 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105872 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105883 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105892 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105900 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105910 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105919 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105928 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105936 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105944 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105951 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105960 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105969 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105979 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105987 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.105994 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106003 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106043 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106053 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106063 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106072 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106081 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106090 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106098 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106106 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.106116 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.106129 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.106469 4698 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.113851 4698 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.114080 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.116185 4698 server.go:997] "Starting client certificate rotation"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.116247 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.116427 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 23:51:17.917111649 +0000 UTC
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.116902 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 996h6m4.800214068s for next certificate rotation
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.150173 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.153074 4698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.171680 4698 log.go:25] "Validated CRI v1 runtime API"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.204522 4698 log.go:25] "Validated CRI v1 image API"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.206622 4698 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.211915 4698 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-06-11-40-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.211970 4698 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.232612 4698 manager.go:217] Machine: {Timestamp:2025-10-06 11:45:13.229790576 +0000 UTC m=+0.642482769 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f BootID:0861d471-78ee-41c9-b36d-d10e0af16681 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b1:4a:dc Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b1:4a:dc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d5:48:99 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:70:7c:cf Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c9:08:77 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b1:bb:06 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:b4:4f:da:b5:25 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:f8:c4:35:43:2e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.232862 4698 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.233094 4698 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.234593 4698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.234775 4698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.234836 4698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.235085 4698 topology_manager.go:138] "Creating topology manager with none policy"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.235098 4698 container_manager_linux.go:303] "Creating device plugin manager"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.235598 4698 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.235624 4698 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.236455 4698 state_mem.go:36] "Initialized new in-memory state store"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.236542 4698 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.239762 4698 kubelet.go:418] "Attempting to sync node with API server"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.239783 4698 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.239798 4698 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.239810 4698 kubelet.go:324] "Adding apiserver pod source"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.239822 4698 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.244116 4698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.245318 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.248123 4698 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.248116 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.248189 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.248346 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.248494 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:13 crc
kubenswrapper[4698]: I1006 11:45:13.249824 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249870 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249887 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249903 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249930 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249946 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249962 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249984 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.249999 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.250042 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.250076 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.250093 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.251099 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.252045 4698 server.go:1280] "Started kubelet" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 
11:45:13.254205 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.254084 4698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.254113 4698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 06 11:45:13 crc systemd[1]: Started Kubernetes Kubelet. Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.255628 4698 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.258222 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.258304 4698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.258421 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:25:39.004241722 +0000 UTC Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.258521 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1321h40m25.745725044s for next certificate rotation Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.261877 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.262175 4698 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.262228 4698 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 
11:45:13.262492 4698 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.264496 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms" Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.265217 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.265435 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.265541 4698 server.go:460] "Adding debug handlers to kubelet server" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.265790 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186be4505e7b50cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-06 11:45:13.251967181 +0000 UTC m=+0.664659394,LastTimestamp:2025-10-06 11:45:13.251967181 +0000 UTC 
m=+0.664659394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268361 4698 factory.go:153] Registering CRI-O factory Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268398 4698 factory.go:221] Registration of the crio container factory successfully Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268498 4698 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268512 4698 factory.go:55] Registering systemd factory Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268524 4698 factory.go:221] Registration of the systemd container factory successfully Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268556 4698 factory.go:103] Registering Raw factory Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.268766 4698 manager.go:1196] Started watching for new ooms in manager Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.271692 4698 manager.go:319] Starting recovery of all containers Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279110 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279195 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 06 11:45:13 crc 
kubenswrapper[4698]: I1006 11:45:13.279212 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279229 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279251 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279268 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279284 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279305 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279325 4698 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279343 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279359 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279380 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279397 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279414 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279451 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279466 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279481 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279522 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279543 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279560 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279577 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279593 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279609 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279625 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279638 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279654 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279675 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279692 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279709 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279726 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279773 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279796 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279818 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279834 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279869 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279886 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279902 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.279919 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.282898 4698 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.282969 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.282995 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283042 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283061 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283083 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283101 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283118 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283139 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283158 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283180 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283198 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283216 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283234 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283252 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283280 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283299 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283362 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283385 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283406 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283432 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283454 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283473 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283491 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283508 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283526 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283542 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283581 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283597 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283615 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283632 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283649 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283668 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283688 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283705 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283722 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283740 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283760 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283779 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283797 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283814 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283832 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283852 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283871 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283888 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283904 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283920 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283936 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283954 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283974 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.283991 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284007 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284055 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284073 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284090 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284106 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284125 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284145 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284170 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284186 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284203 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284220 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284237 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284255 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284272 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284292 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284312 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284346 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284366 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284384 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284403 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284421 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284440 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284460 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284477 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284495 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284513 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284531 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284549 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284565 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284582 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284598 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284647 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284666 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284687 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284706 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284723 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284741 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284761 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284782 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284797 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284818 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284841 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284857 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284875 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284893 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284915 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284930 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284945 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284963 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284976 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.284990 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285003 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285057 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285081 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285101 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285118 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285137 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285153 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285174 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285196 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285214 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285231 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285258 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285275 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285294 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285312 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285330 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285404 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285426 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285445 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285466 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285486 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285505 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285523 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285543 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285559 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285577 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285598 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285616 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285635 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285652 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285671 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285695 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285715 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285734 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285751 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285773 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285788 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285806 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285823 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285838 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285853 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285871 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285889 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.285906 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287436 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287488 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287516 4698 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287537 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287559 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287580 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287605 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287630 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287654 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287675 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287697 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287719 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287741 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287765 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287830 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287853 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287874 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287896 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287919 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287940 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287961 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" 
seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.287984 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.288006 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.288056 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.288074 4698 reconstruct.go:97] "Volume reconstruction finished" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.288089 4698 reconciler.go:26] "Reconciler: start to sync state" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.289969 4698 manager.go:324] Recovery completed Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.302562 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.304203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.304237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.304248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.305226 4698 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.305242 4698 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.305260 4698 state_mem.go:36] "Initialized new in-memory state store" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.323393 4698 policy_none.go:49] "None policy: Start" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.324337 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.324667 4698 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.324710 4698 state_mem.go:35] "Initializing new in-memory state store" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.327511 4698 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.327575 4698 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.327618 4698 kubelet.go:2335] "Starting kubelet main sync loop" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.327732 4698 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.328440 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.328514 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.362391 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.389851 4698 manager.go:334] "Starting Device Plugin manager" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.390058 4698 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.390084 4698 server.go:79] "Starting device plugin registration server" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.391291 4698 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 06 11:45:13 crc 
kubenswrapper[4698]: I1006 11:45:13.391365 4698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.391838 4698 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.392201 4698 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.392227 4698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.402662 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.429382 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.429511 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.431261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.431325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.431344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.431547 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.432809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.432846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.432857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.433321 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.433549 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.433431 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.433341 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.433860 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.434773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.434838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.434864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.435132 4698 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.435335 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.435405 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc 
kubenswrapper[4698]: I1006 11:45:13.436487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436679 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436790 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.436812 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437757 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437800 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.437893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.439061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.439089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.439130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.465252 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.490973 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:13 crc 
kubenswrapper[4698]: I1006 11:45:13.490998 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491134 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491256 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491332 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 11:45:13 crc 
kubenswrapper[4698]: I1006 11:45:13.491388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491508 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.491691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.492911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.492953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.492964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.492989 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.493486 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.592898 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593675 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593725 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593737 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593875 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593952 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.593910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594074 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594176 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594211 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594315 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.594436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.693673 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.696735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.696814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.696843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.696892 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.697526 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.762344 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.784236 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.798324 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.823129 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: I1006 11:45:13.831248 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.834667 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e33deb9d5694a4407697276009b5e16f6d3de6f11843410f490215c03f07270c WatchSource:0}: Error finding container e33deb9d5694a4407697276009b5e16f6d3de6f11843410f490215c03f07270c: Status 404 returned error can't find the container with id e33deb9d5694a4407697276009b5e16f6d3de6f11843410f490215c03f07270c
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.837715 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-714af59d761c90ebd3f29ce35dbfcc8838504bb1e23bb5c155e276bfc7544b13 WatchSource:0}: Error finding container 714af59d761c90ebd3f29ce35dbfcc8838504bb1e23bb5c155e276bfc7544b13: Status 404 returned error can't find the container with id 714af59d761c90ebd3f29ce35dbfcc8838504bb1e23bb5c155e276bfc7544b13
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.844125 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-72b621b9264e0832e0ac25836d5684571fb791bebad773b8eee38ea390bc8f03 WatchSource:0}: Error finding container 72b621b9264e0832e0ac25836d5684571fb791bebad773b8eee38ea390bc8f03: Status 404 returned error can't find the container with id 72b621b9264e0832e0ac25836d5684571fb791bebad773b8eee38ea390bc8f03
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.850867 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-dde096660eeebbe617ccf4f3bf3d78a1af45fcd50a3c651bb982daa9d84efc73 WatchSource:0}: Error finding container dde096660eeebbe617ccf4f3bf3d78a1af45fcd50a3c651bb982daa9d84efc73: Status 404 returned error can't find the container with id dde096660eeebbe617ccf4f3bf3d78a1af45fcd50a3c651bb982daa9d84efc73
Oct 06 11:45:13 crc kubenswrapper[4698]: W1006 11:45:13.854876 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-37c9c0b3d445652f11a391a90f3969fd361a72b7197749439019afd3de4ad40f WatchSource:0}: Error finding container 37c9c0b3d445652f11a391a90f3969fd361a72b7197749439019afd3de4ad40f: Status 404 returned error can't find the container with id 37c9c0b3d445652f11a391a90f3969fd361a72b7197749439019afd3de4ad40f
Oct 06 11:45:13 crc kubenswrapper[4698]: E1006 11:45:13.867255 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.098471 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.102551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.102614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.102642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.102693 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.103540 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc"
Oct 06 11:45:14 crc kubenswrapper[4698]: W1006 11:45:14.197372 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.197540 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.255370 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:14 crc kubenswrapper[4698]: W1006 11:45:14.311000 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.311164 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.332408 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"714af59d761c90ebd3f29ce35dbfcc8838504bb1e23bb5c155e276bfc7544b13"}
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.334184 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e33deb9d5694a4407697276009b5e16f6d3de6f11843410f490215c03f07270c"}
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.336062 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"37c9c0b3d445652f11a391a90f3969fd361a72b7197749439019afd3de4ad40f"}
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.338670 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dde096660eeebbe617ccf4f3bf3d78a1af45fcd50a3c651bb982daa9d84efc73"}
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.339835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"72b621b9264e0832e0ac25836d5684571fb791bebad773b8eee38ea390bc8f03"}
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.668237 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s"
Oct 06 11:45:14 crc kubenswrapper[4698]: W1006 11:45:14.811676 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.811822 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:14 crc kubenswrapper[4698]: W1006 11:45:14.833726 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.833924 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.904043 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.906086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.906133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.906146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:14 crc kubenswrapper[4698]: I1006 11:45:14.906177 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:45:14 crc kubenswrapper[4698]: E1006 11:45:14.906812 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.255360 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.347107 4698 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d" exitCode=0
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.347226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.347387 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.349621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.349754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.349776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.354765 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.354860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.354897 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.358552 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453" exitCode=0
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.358618 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.358765 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.361088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.361139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.361157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.363113 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0" exitCode=0
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.363234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.363547 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.365098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.365146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.365167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.366930 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989" exitCode=0
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.367060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989"}
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.367219 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.368863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.369070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.369226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.370479 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.372091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.372148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:15 crc kubenswrapper[4698]: I1006 11:45:15.372167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.255919 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:16 crc kubenswrapper[4698]: E1006 11:45:16.269663 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="3.2s"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.381692 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.381842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.381876 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.384689 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d3c92b853936feb3512efe9fd7d07aacc3495f7d64d4d9f6a73a5317b3613440"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.384785 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.386410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.386488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.386513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.391243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.391306 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.391326 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.391339 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.394052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.394100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.394121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.401901 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.401959 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.403777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.403836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.403868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.404778 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e" exitCode=0
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.404927 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e"}
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.404974 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.406327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.406372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.406384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: W1006 11:45:16.457517 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Oct 06 11:45:16 crc kubenswrapper[4698]: E1006 11:45:16.457643 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.507308 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.509000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.509060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.509075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 11:45:16 crc kubenswrapper[4698]: I1006 11:45:16.509101 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 11:45:16 crc kubenswrapper[4698]: E1006 11:45:16.509764 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc"
Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.411849 4698 generic.go:334] "Generic (PLEG): 
container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28" exitCode=0 Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.411935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28"} Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.412135 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.413770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.413822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.413850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573"} Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423342 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e"} Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423359 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423396 4698 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423293 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423495 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.423535 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.426042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 
11:45:17.426058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.425992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.426122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:17 crc kubenswrapper[4698]: I1006 11:45:17.736794 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.351115 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.436706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0"} Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.436787 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0"} Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.436811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016"} Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.436835 4698 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.436918 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.437058 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:18 crc kubenswrapper[4698]: I1006 11:45:18.438990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.107984 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.450529 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47"} Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.450614 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31"} Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.450550 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.450697 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.450705 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.453369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.658588 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.710246 4698 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.712954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.713144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.713293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:19 crc kubenswrapper[4698]: I1006 11:45:19.713430 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.144763 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.453796 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.453820 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.456755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.456962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.457223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.457381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.457432 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.457448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.958375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.959160 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.961375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.961439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.961451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:20 crc kubenswrapper[4698]: I1006 11:45:20.966959 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.457541 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.457541 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.458842 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.458965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.458999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.459045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.460296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.460379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.460399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.461364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.461444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:21 crc kubenswrapper[4698]: I1006 11:45:21.461472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 11:45:22.130954 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 11:45:22.461336 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 11:45:22.463340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 
11:45:22.463415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 11:45:22.463437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:22 crc kubenswrapper[4698]: I1006 11:45:22.646095 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:23 crc kubenswrapper[4698]: E1006 11:45:23.402809 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 11:45:23 crc kubenswrapper[4698]: I1006 11:45:23.465625 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:23 crc kubenswrapper[4698]: I1006 11:45:23.467298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:23 crc kubenswrapper[4698]: I1006 11:45:23.467357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:23 crc kubenswrapper[4698]: I1006 11:45:23.467370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.452992 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.468886 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.471625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.471675 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.471689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:24 crc kubenswrapper[4698]: I1006 11:45:24.476639 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:25 crc kubenswrapper[4698]: I1006 11:45:25.476319 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:25 crc kubenswrapper[4698]: I1006 11:45:25.477654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:25 crc kubenswrapper[4698]: I1006 11:45:25.477789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:25 crc kubenswrapper[4698]: I1006 11:45:25.477873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.322525 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.322913 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.324765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.324832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.324845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 11:45:26 crc kubenswrapper[4698]: W1006 11:45:26.956967 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 06 11:45:26 crc kubenswrapper[4698]: I1006 11:45:26.957129 4698 trace.go:236] Trace[1230685234]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:45:16.956) (total time: 10000ms): Oct 06 11:45:26 crc kubenswrapper[4698]: Trace[1230685234]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:45:26.956) Oct 06 11:45:26 crc kubenswrapper[4698]: Trace[1230685234]: [10.00099388s] [10.00099388s] END Oct 06 11:45:26 crc kubenswrapper[4698]: E1006 11:45:26.957166 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 06 11:45:27 crc kubenswrapper[4698]: W1006 11:45:27.119884 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 06 11:45:27 crc kubenswrapper[4698]: I1006 11:45:27.120007 4698 trace.go:236] Trace[73210009]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:45:17.118) (total time: 10001ms): Oct 06 11:45:27 crc kubenswrapper[4698]: Trace[73210009]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS 
handshake timeout 10001ms (11:45:27.119) Oct 06 11:45:27 crc kubenswrapper[4698]: Trace[73210009]: [10.001337639s] [10.001337639s] END Oct 06 11:45:27 crc kubenswrapper[4698]: E1006 11:45:27.120055 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 06 11:45:27 crc kubenswrapper[4698]: I1006 11:45:27.256261 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 06 11:45:27 crc kubenswrapper[4698]: I1006 11:45:27.453830 4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 06 11:45:27 crc kubenswrapper[4698]: I1006 11:45:27.453981 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 06 11:45:28 crc kubenswrapper[4698]: W1006 11:45:28.001942 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 06 11:45:28 crc kubenswrapper[4698]: 
I1006 11:45:28.002061 4698 trace.go:236] Trace[916418161]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:45:18.000) (total time: 10001ms): Oct 06 11:45:28 crc kubenswrapper[4698]: Trace[916418161]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:45:28.001) Oct 06 11:45:28 crc kubenswrapper[4698]: Trace[916418161]: [10.001643846s] [10.001643846s] END Oct 06 11:45:28 crc kubenswrapper[4698]: E1006 11:45:28.002084 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.199925 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.200067 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.208186 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.208258 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.358552 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]log ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]etcd ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/generic-apiserver-start-informers ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-filter ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-apiextensions-informers ok Oct 06 11:45:28 crc 
kubenswrapper[4698]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Oct 06 11:45:28 crc kubenswrapper[4698]: [-]poststarthook/crd-informer-synced failed: reason withheld Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-system-namespaces-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 06 11:45:28 crc kubenswrapper[4698]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 06 11:45:28 crc kubenswrapper[4698]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/bootstrap-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/start-kube-aggregator-informers ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-registration-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-discovery-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: 
[+]poststarthook/kube-apiserver-autoregistration ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]autoregister-completion ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-openapi-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 06 11:45:28 crc kubenswrapper[4698]: livez check failed Oct 06 11:45:28 crc kubenswrapper[4698]: I1006 11:45:28.359613 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.138482 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.253323 4698 apiserver.go:52] "Watching apiserver" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.260269 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.260555 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261100 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261301 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261324 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:32 crc kubenswrapper[4698]: E1006 11:45:32.261370 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261422 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:32 crc kubenswrapper[4698]: E1006 11:45:32.261461 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261595 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:32 crc kubenswrapper[4698]: E1006 11:45:32.261693 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.261886 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.263597 4698 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.265279 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.265923 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.265954 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.265968 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.266147 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.267438 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.267537 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.267628 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 
06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.268329 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.299954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.326467 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.339583 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.358810 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.374407 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.391607 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.435263 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.451745 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.484386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:32 crc kubenswrapper[4698]: I1006 11:45:32.581450 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.190770 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.204633 4698 trace.go:236] Trace[579116232]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 11:45:21.746) (total time: 11458ms): Oct 06 11:45:33 crc kubenswrapper[4698]: Trace[579116232]: ---"Objects listed" error: 11458ms (11:45:33.204) Oct 06 11:45:33 crc kubenswrapper[4698]: Trace[579116232]: [11.458456468s] [11.458456468s] END Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.204990 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.208161 4698 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.216804 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.292807 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56680->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.293506 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56680->192.168.126.11:17697: read: connection reset by peer" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308545 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308583 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308602 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308617 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308635 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308654 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " 
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308701 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308725 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308781 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308843 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308878 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308929 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308945 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.308969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309024 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309072 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309105 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309155 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309454 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309497 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309506 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309551 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309567 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309724 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309834 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309905 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.309985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310109 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310176 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310202 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310298 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310244 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310323 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310251 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310331 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310289 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310358 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310326 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310473 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310555 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310581 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310606 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310654 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310671 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310678 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310690 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310708 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310741 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310835 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310857 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310909 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310934 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310960 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311007 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 
11:45:33.311049 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311072 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311095 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311164 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311216 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311239 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311261 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311284 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310711 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310904 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310912 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.310917 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.311297 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.312497 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.312789 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315590 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315656 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315707 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315787 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315813 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315835 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315887 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315916 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315587 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313195 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313282 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313383 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313567 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313685 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313774 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313796 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.313968 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.314067 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.314091 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.314393 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.314557 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.314655 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315227 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315386 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315741 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315754 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.315823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.315952 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:33.815929916 +0000 UTC m=+21.228622089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316481 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316486 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316540 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316600 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316626 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316647 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316667 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316687 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316207 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316714 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316738 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316762 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316783 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316809 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316832 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316876 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316896 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316919 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317126 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317148 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317182 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317204 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:45:33 crc 
kubenswrapper[4698]: I1006 11:45:33.317226 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317268 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317330 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317352 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317394 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317415 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317434 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317452 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317472 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317513 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317532 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317572 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317593 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317614 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317634 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317656 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317678 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317724 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317744 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317813 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317891 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317909 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317931 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317982 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318122 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318154 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318185 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318213 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318243 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318275 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318332 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318359 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318456 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318484 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318509 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318538 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318563 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318587 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318614 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318640 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318665 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318690 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318722 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318754 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318877 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318905 
4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318932 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318959 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319059 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" 
(UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319135 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319189 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319214 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319241 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319293 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319317 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319343 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319370 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319417 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 
06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319467 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319520 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319547 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319570 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319589 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319608 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319625 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319645 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319663 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319680 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319697 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 
11:45:33.319751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319819 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319903 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319930 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320151 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321473 4698 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321496 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321515 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321530 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321544 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321560 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321574 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321592 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321605 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322251 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322274 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322288 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322301 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322314 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322326 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322338 4698 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322351 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322363 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322376 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322388 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322401 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322414 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322426 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322438 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322450 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322562 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322579 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322593 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322607 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322619 4698 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322632 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322644 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322656 4698 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322669 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322681 4698 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322693 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322705 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322718 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322728 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322740 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322752 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322764 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322778 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322791 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322804 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322816 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322827 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322839 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322851 4698 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322863 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322875 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322887 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322899 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322912 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322922 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322934 4698 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322946 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322959 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322972 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322984 4698 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316616 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.316946 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317115 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317128 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317193 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317274 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317588 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317766 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.317971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318126 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318229 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318349 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318386 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318503 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318704 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318748 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318742 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318922 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.318933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319117 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319213 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.319443 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320102 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320121 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320173 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320683 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320759 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.320835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321083 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.321332 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322840 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.322876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324002 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324003 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324104 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324707 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.324835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325325 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325347 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325341 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325467 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325662 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325725 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325801 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.325921 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327188 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327226 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327242 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327329 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327541 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327655 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.327956 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.328296 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.328443 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.328590 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.328677 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.328966 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329253 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329508 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.329947 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330033 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330412 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330801 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330804 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.330926 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331039 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331027 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331408 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331460 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331528 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.331821 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.332289 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.332476 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.334656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.335036 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.335672 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.336152 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.335303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.336367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.337056 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.337087 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.337399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.337698 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.337777 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.337800 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:33.837775526 +0000 UTC m=+21.250467779 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.338091 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.338423 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.338661 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.338678 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.339004 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.339312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.339483 4698 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.340026 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.340128 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:33.840100893 +0000 UTC m=+21.252793066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.340398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.340602 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.340685 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.340868 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.341374 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.342389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.342440 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.342700 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.343045 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.343145 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.343596 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.344368 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.345994 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.346128 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.346953 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.349134 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.349478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.350284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.356001 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.356055 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.356074 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.356203 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:33.856148302 +0000 UTC m=+21.268840465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.356310 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.356350 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.359525 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.359798 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.359864 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.359965 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.360748 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.362117 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.362221 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: 
"5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.362446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.362593 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.363069 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.363109 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.363625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.363777 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.364339 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.365886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.365998 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.366057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.366517 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.366539 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.366552 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.366607 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:33.866588676 +0000 UTC m=+21.279280849 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.366701 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.367434 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.371726 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.372547 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.373165 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.373215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: 
"96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.374126 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.388174 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.389648 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.396167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") 
pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.405067 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.413467 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.423248 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.423700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.423855 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.423789 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.423914 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424047 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424419 4698 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424440 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424455 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424470 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424490 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424524 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424540 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424556 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424568 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424580 4698 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424592 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424603 4698 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424613 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424625 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424635 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424645 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424656 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424671 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424684 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424697 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424710 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424723 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424735 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424751 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424764 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424778 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424792 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424804 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424820 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424834 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424848 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424861 4698 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424876 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424889 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424901 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424916 4698 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424930 4698 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424944 4698 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424959 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424972 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424985 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.424999 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425030 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425045 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425058 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425070 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425082 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425096 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425300 4698 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425316 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425332 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425351 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425369 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425382 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425395 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425416 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425440 4698 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425776 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425790 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425802 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425812 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425826 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425839 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425853 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425863 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425873 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425882 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.425891 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426320 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426330 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426339 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426365 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426374 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 
11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426385 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426397 4698 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426409 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426423 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426435 4698 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426447 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.426458 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 
11:45:33.427201 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427215 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427227 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427243 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427255 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427268 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427279 4698 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427292 4698 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427305 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427318 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427331 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427343 4698 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427355 4698 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427367 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427378 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427390 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427402 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427415 4698 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427427 4698 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427438 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427450 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427471 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc 
kubenswrapper[4698]: I1006 11:45:33.427483 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427494 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427506 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427518 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427531 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427543 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427555 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427567 4698 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427579 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427591 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427604 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427615 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427627 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427638 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427649 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427662 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427675 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427687 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427699 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427713 4698 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427726 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427738 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc 
kubenswrapper[4698]: I1006 11:45:33.427749 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427760 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427772 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427785 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427799 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.427815 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.429032 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.429938 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.438029 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.438817 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.439848 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.451981 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.454361 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.464563 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.485399 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.502158 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.502360 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.505125 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e" exitCode=255 Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.514408 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 11:45:33 crc kubenswrapper[4698]: W1006 11:45:33.521599 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7483ba7ec7e5503d3561812d52c35d4ba821ccb71592025a90dd394d0caf79c3 WatchSource:0}: Error finding container 7483ba7ec7e5503d3561812d52c35d4ba821ccb71592025a90dd394d0caf79c3: Status 404 returned error can't find the container with id 7483ba7ec7e5503d3561812d52c35d4ba821ccb71592025a90dd394d0caf79c3 Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.527137 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.537093 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.537866 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.538528 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.539577 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.540908 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.541870 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.546295 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.547385 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.551991 4698 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.552198 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.570230 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.570288 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.593608 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.594961 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.595747 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.596203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.599054 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.600930 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.601828 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.603630 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.604752 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.605803 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.607497 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.608337 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.609598 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.610218 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.611522 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.612282 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.614214 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.614448 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.614968 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.615974 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.616570 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.618749 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.619507 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.620113 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.623429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e"} Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.623591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.633351 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.633377 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.633392 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.640112 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.647006 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.659919 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.670151 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.680279 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.689991 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.700121 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.710783 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.727148 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.741081 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.755004 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.765618 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.775952 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.788743 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.789132 4698 scope.go:117] "RemoveContainer" containerID="38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.835134 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.835332 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:34.835302129 +0000 UTC m=+22.247994302 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.936603 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.936656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:33 crc 
kubenswrapper[4698]: I1006 11:45:33.936687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:33 crc kubenswrapper[4698]: I1006 11:45:33.936709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936773 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936795 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936806 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936856 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-06 11:45:34.936842455 +0000 UTC m=+22.349534628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936857 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936900 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:34.936891586 +0000 UTC m=+22.349583759 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936902 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936938 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:34.936929037 +0000 UTC m=+22.349621210 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936804 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936962 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936971 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:33 crc kubenswrapper[4698]: E1006 11:45:33.936998 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:34.936990039 +0000 UTC m=+22.349682222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.328142 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.328214 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.328227 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.328777 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.329300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.329366 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.402006 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-x762x"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.402690 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: W1006 11:45:34.404735 4698 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.404811 4698 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.405124 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dxgjr"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.406375 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7mj8x"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.406578 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.407200 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.408205 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: W1006 11:45:34.412285 4698 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.412336 4698 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.413911 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4f8bs"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.414171 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.414246 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.414467 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.414534 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.416296 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.417196 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.417325 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sz4ws"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.417521 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.419915 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.420427 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.420500 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.425642 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.425782 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.427711 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.427970 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.432337 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.432656 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.432976 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.434154 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.434223 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.434405 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.437381 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.439906 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.464423 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.469167 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.473925 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.477264 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.500567 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.510527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.510579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0a676ef549940cc62d8ecafa7572c08012c9f71a7603e84c722f1f61bf300d8a"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.511836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7483ba7ec7e5503d3561812d52c35d4ba821ccb71592025a90dd394d0caf79c3"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.514131 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.516155 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.516353 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.516437 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.517990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.518041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.518058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3cfaa3de4009ae59333f42290360219810e1b4d5ffdd712eab46fa0381a270fa"} Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.526634 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.531308 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.540944 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.540983 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541038 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541065 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541104 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541154 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-multus\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541180 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl67j\" (UniqueName: \"kubernetes.io/projected/d89609a5-c527-41c2-a78b-e3dbc6ce8819-kube-api-access-tl67j\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541255 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-daemon-config\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 
11:45:34.541280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541304 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-os-release\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw8p\" (UniqueName: \"kubernetes.io/projected/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-kube-api-access-5hw8p\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541358 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541408 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-kubelet\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541461 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541494 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r458j\" (UniqueName: \"kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-multus-certs\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541588 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-rootfs\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc 
kubenswrapper[4698]: I1006 11:45:34.541613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541641 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cnibin\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541664 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-socket-dir-parent\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-netns\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541720 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-hostroot\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541749 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cnibin\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541776 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541864 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541920 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtv5j\" (UniqueName: \"kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/50439b92-052f-4198-bff0-e5d256bf46b1-hosts-file\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: 
I1006 11:45:34.541972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cni-binary-copy\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.541995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542027 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542227 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-system-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542304 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-proxy-tls\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mj8x\" (UID: 
\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542341 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-system-cni-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542441 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-etc-kubernetes\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542458 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-os-release\") pod 
\"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-k8s-cni-cncf-io\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-bin\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-conf-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.542670 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffx7k\" (UniqueName: \"kubernetes.io/projected/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-kube-api-access-ffx7k\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.544859 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.557162 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.570677 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.581360 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.604894 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.641338 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643608 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643653 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643675 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-system-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643696 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643737 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643754 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-proxy-tls\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643796 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-system-cni-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643832 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643867 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-etc-kubernetes\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643854 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643951 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-system-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.643885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-os-release\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644143 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-bin\") pod 
\"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-conf-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffx7k\" (UniqueName: \"kubernetes.io/projected/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-kube-api-access-ffx7k\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-k8s-cni-cncf-io\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: 
\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644274 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-multus\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644359 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644374 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl67j\" (UniqueName: \"kubernetes.io/projected/d89609a5-c527-41c2-a78b-e3dbc6ce8819-kube-api-access-tl67j\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-daemon-config\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644428 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644454 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-os-release\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " 
pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hw8p\" (UniqueName: \"kubernetes.io/projected/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-kube-api-access-5hw8p\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644552 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644581 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-os-release\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644605 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-kubelet\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644654 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: 
I1006 11:45:34.644679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644699 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r458j\" (UniqueName: \"kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644761 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-multus-certs\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-rootfs\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644805 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-socket-dir-parent\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644825 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-netns\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644846 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-hostroot\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cnibin\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cnibin\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644942 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.644974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/50439b92-052f-4198-bff0-e5d256bf46b1-hosts-file\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645029 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/50439b92-052f-4198-bff0-e5d256bf46b1-hosts-file\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645065 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cni-binary-copy\") pod \"multus-4f8bs\" (UID: 
\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645099 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-bin\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645186 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-conf-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-system-cni-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645367 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 
11:45:34.645430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-cni-multus\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645537 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-rootfs\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-multus-certs\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-socket-dir-parent\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645739 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645765 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtv5j\" (UniqueName: \"kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-netns\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-hostroot\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cnibin\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.645943 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-run-k8s-cni-cncf-io\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646091 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646313 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-cni-dir\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " 
pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cni-binary-copy\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-multus-daemon-config\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646792 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-host-var-lib-kubelet\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646852 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-cnibin\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646924 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-etc-kubernetes\") pod \"multus-4f8bs\" (UID: 
\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-os-release\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646961 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.646971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d89609a5-c527-41c2-a78b-e3dbc6ce8819-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.647350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.647598 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d89609a5-c527-41c2-a78b-e3dbc6ce8819-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " 
pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.647655 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.658794 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-proxy-tls\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.659211 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.672579 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtv5j\" (UniqueName: \"kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j\") pod \"ovnkube-node-sz4ws\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.675073 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.675564 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hw8p\" (UniqueName: \"kubernetes.io/projected/e581ae92-9ea3-40a6-abd4-09eb81bb5be4-kube-api-access-5hw8p\") pod \"multus-4f8bs\" (UID: \"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\") " pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.680554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl67j\" (UniqueName: \"kubernetes.io/projected/d89609a5-c527-41c2-a78b-e3dbc6ce8819-kube-api-access-tl67j\") pod \"multus-additional-cni-plugins-dxgjr\" (UID: \"d89609a5-c527-41c2-a78b-e3dbc6ce8819\") " pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.690668 4698 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.690827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffx7k\" (UniqueName: \"kubernetes.io/projected/490a89c4-aeb3-4c8f-bdfb-c36f7fc40209-kube-api-access-ffx7k\") pod \"machine-config-daemon-7mj8x\" (UID: \"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\") " pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.706063 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.717803 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.730433 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.736042 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.746151 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.746159 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.754054 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4f8bs" Oct 06 11:45:34 crc kubenswrapper[4698]: W1006 11:45:34.759086 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490a89c4_aeb3_4c8f_bdfb_c36f7fc40209.slice/crio-6ab1c8f3bc3cdd4271fb5e0217def51597695174d4d1748baa0df3a3ca7bc07e WatchSource:0}: Error finding container 6ab1c8f3bc3cdd4271fb5e0217def51597695174d4d1748baa0df3a3ca7bc07e: Status 404 returned error can't find the container with id 6ab1c8f3bc3cdd4271fb5e0217def51597695174d4d1748baa0df3a3ca7bc07e Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.760081 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.761065 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.772055 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.787419 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:34Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:34 crc kubenswrapper[4698]: W1006 11:45:34.798899 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc16ee453_14bb_4f57_addd_3fc27cb739de.slice/crio-c4485aa5a84e67954bcc3496b5baf885e13b244e38470ec6802e752acac1e4e3 WatchSource:0}: Error finding container c4485aa5a84e67954bcc3496b5baf885e13b244e38470ec6802e752acac1e4e3: Status 404 returned error can't find the container with id c4485aa5a84e67954bcc3496b5baf885e13b244e38470ec6802e752acac1e4e3 Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.849989 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.850345 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:45:36.850310209 +0000 UTC m=+24.263002382 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.950727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.950781 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.950813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:34 crc kubenswrapper[4698]: I1006 11:45:34.950836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.950924 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.950962 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.950998 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951040 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:36.950999142 +0000 UTC m=+24.363691315 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951043 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951062 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951063 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:36.951054433 +0000 UTC m=+24.363746606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951128 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-06 11:45:36.951110884 +0000 UTC m=+24.363803047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951175 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951218 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951231 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:34 crc kubenswrapper[4698]: E1006 11:45:34.951314 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:36.9512917 +0000 UTC m=+24.363983873 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.333031 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.334220 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.335197 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.523276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerStarted","Data":"ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.523332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerStarted","Data":"4c3a84fe616dbaa2469acc7f1fd8f56c481c5281a17458829ce46cbdff8b17ae"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.525227 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" 
containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" exitCode=0 Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.525303 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.525360 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"c4485aa5a84e67954bcc3496b5baf885e13b244e38470ec6802e752acac1e4e3"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.528006 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.528170 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.528195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"6ab1c8f3bc3cdd4271fb5e0217def51597695174d4d1748baa0df3a3ca7bc07e"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.530811 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250" exitCode=0 
Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.530931 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.530990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerStarted","Data":"d900fbea0c10f0823b1d774e1d6076b7092cdaf230e1426a59fabe288a710794"} Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.551383 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.564279 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.591261 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.619096 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.635379 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.652503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.666374 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: E1006 11:45:35.673284 4698 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 06 11:45:35 crc kubenswrapper[4698]: E1006 11:45:35.673343 4698 projected.go:194] Error preparing data for projected volume kube-api-access-r458j for pod openshift-dns/node-resolver-x762x: failed to sync configmap cache: timed out waiting for the condition Oct 06 11:45:35 crc kubenswrapper[4698]: E1006 11:45:35.673419 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j podName:50439b92-052f-4198-bff0-e5d256bf46b1 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:36.173385132 +0000 UTC m=+23.586077305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r458j" (UniqueName: "kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j") pod "node-resolver-x762x" (UID: "50439b92-052f-4198-bff0-e5d256bf46b1") : failed to sync configmap cache: timed out waiting for the condition Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.691598 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.720341 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.741411 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.763245 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.785107 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.808319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.830440 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.863700 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.892777 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.920596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.948678 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.954177 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.963163 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.968086 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.980991 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:35 crc kubenswrapper[4698]: I1006 11:45:35.995442 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:35Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.010565 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.032841 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.049540 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.065784 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.081911 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.266542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r458j\" (UniqueName: \"kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.276106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r458j\" (UniqueName: \"kubernetes.io/projected/50439b92-052f-4198-bff0-e5d256bf46b1-kube-api-access-r458j\") pod \"node-resolver-x762x\" (UID: \"50439b92-052f-4198-bff0-e5d256bf46b1\") " pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.328105 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.328168 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.328234 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.328253 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.328392 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.328487 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.357157 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.379473 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.389819 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.391164 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.415584 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.437995 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.453332 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.470195 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.489313 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.507172 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.524712 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.530962 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x762x" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.539568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.539649 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.539664 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.539675 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.542350 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d" exitCode=0 Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.542399 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.543827 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58"} Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.547749 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.561699 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.577485 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.592202 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.610464 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: W1006 11:45:36.615865 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50439b92_052f_4198_bff0_e5d256bf46b1.slice/crio-5957884e4ea3390c5e4867420287b3baedca2a95ff4b06eab4a67b00aa819c16 WatchSource:0}: Error finding container 5957884e4ea3390c5e4867420287b3baedca2a95ff4b06eab4a67b00aa819c16: Status 404 returned error can't find the container with id 5957884e4ea3390c5e4867420287b3baedca2a95ff4b06eab4a67b00aa819c16 Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.627470 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.642297 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.655675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.671095 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.686194 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.711300 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.733989 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.748319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.766027 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.785527 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.807366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.821862 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.834440 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.858917 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:36Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.873972 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.874577 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:40.874553406 +0000 UTC m=+28.287245569 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.974904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.974958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.974988 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:36 crc kubenswrapper[4698]: I1006 11:45:36.975027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975130 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975190 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:40.975176348 +0000 UTC m=+28.387868521 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975277 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975323 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975356 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975469 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:40.975441544 +0000 UTC m=+28.388133897 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975489 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975286 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975584 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:40.975559347 +0000 UTC m=+28.388251540 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975613 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975633 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:36 crc kubenswrapper[4698]: E1006 11:45:36.975677 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:40.97566886 +0000 UTC m=+28.388361033 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.552712 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18" exitCode=0 Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.552810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18"} Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.558281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.558355 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.561227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x762x" event={"ID":"50439b92-052f-4198-bff0-e5d256bf46b1","Type":"ContainerStarted","Data":"a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa"} Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 
11:45:37.561318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x762x" event={"ID":"50439b92-052f-4198-bff0-e5d256bf46b1","Type":"ContainerStarted","Data":"5957884e4ea3390c5e4867420287b3baedca2a95ff4b06eab4a67b00aa819c16"} Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.570776 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.584786 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.614602 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 
11:45:37.649841 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.665640 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.682989 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.711267 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.731584 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.763784 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.777703 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.797226 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.812913 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.828785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.847142 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.863905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.884766 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.903656 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.920266 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.946428 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 
11:45:37.947599 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5tqfs"] Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.948411 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.951658 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.954293 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.954837 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.954943 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.980945 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:37 crc kubenswrapper[4698]: I1006 11:45:37.998704 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:37Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.019799 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.034875 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.054031 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.070978 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.087685 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3afedf6c-a96a-4c64-b3b7-411361950f7c-host\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.087736 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/3afedf6c-a96a-4c64-b3b7-411361950f7c-serviceca\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.087855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btwf6\" (UniqueName: \"kubernetes.io/projected/3afedf6c-a96a-4c64-b3b7-411361950f7c-kube-api-access-btwf6\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.108039 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.126157 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.142664 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.157243 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.177791 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.189197 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btwf6\" (UniqueName: \"kubernetes.io/projected/3afedf6c-a96a-4c64-b3b7-411361950f7c-kube-api-access-btwf6\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.189291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3afedf6c-a96a-4c64-b3b7-411361950f7c-host\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.189343 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3afedf6c-a96a-4c64-b3b7-411361950f7c-serviceca\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.189545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3afedf6c-a96a-4c64-b3b7-411361950f7c-host\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.191230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3afedf6c-a96a-4c64-b3b7-411361950f7c-serviceca\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.195848 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.214707 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.214941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btwf6\" (UniqueName: \"kubernetes.io/projected/3afedf6c-a96a-4c64-b3b7-411361950f7c-kube-api-access-btwf6\") pod \"node-ca-5tqfs\" (UID: \"3afedf6c-a96a-4c64-b3b7-411361950f7c\") " pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.234480 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.261433 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.269586 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5tqfs" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.277463 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: W1006 11:45:38.290425 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3afedf6c_a96a_4c64_b3b7_411361950f7c.slice/crio-d395282ca0e7b0c6293671fe24ab79b9535f890ffd4720799e24a29997156721 WatchSource:0}: Error finding container d395282ca0e7b0c6293671fe24ab79b9535f890ffd4720799e24a29997156721: Status 404 returned error can't find the container with id d395282ca0e7b0c6293671fe24ab79b9535f890ffd4720799e24a29997156721 Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.298868 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.319311 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.328671 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.328698 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.328712 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:38 crc kubenswrapper[4698]: E1006 11:45:38.328910 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:38 crc kubenswrapper[4698]: E1006 11:45:38.329059 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:38 crc kubenswrapper[4698]: E1006 11:45:38.329298 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.344824 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.361469 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.384956 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.404125 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.421757 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 
11:45:38.448783 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.567091 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tqfs" event={"ID":"3afedf6c-a96a-4c64-b3b7-411361950f7c","Type":"ContainerStarted","Data":"d395282ca0e7b0c6293671fe24ab79b9535f890ffd4720799e24a29997156721"} Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.570299 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61" exitCode=0 Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.570339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61"} Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.593635 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.611181 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.636547 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.671672 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.697279 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.716603 4698 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.738231 4698 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.768007 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.788777 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.811382 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.829907 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.845373 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.860600 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.876219 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:38 crc kubenswrapper[4698]: I1006 11:45:38.892984 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:38Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.581576 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307" exitCode=0 Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.581696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.585151 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tqfs" 
event={"ID":"3afedf6c-a96a-4c64-b3b7-411361950f7c","Type":"ContainerStarted","Data":"3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.598718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.615581 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.618285 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.621781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.621846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.621867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.622137 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.635309 4698 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.635879 4698 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.637597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.637650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.637668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.637698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.637719 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.640973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.659735 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.660364 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.666484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.666674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.666844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.666983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.667163 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.680251 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.691806 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.698212 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.699133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.699202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.699223 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.699258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.699288 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.713762 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.718575 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.723568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.723595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.723606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.723657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.723670 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.733122 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.739859 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.747128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.747171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.747190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.747218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.747237 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.761714 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.764196 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: E1006 11:45:39.764435 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.767065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.767105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.767119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.767142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.767157 4698 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.779229 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.795455 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.812212 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.838642 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.854728 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.868998 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.869464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.869487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.869497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.869516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.869529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.888402 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.909213 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.925424 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.943661 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.960230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.973401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.973448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.973460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.973481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.973493 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:39Z","lastTransitionTime":"2025-10-06T11:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.977607 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:39 crc kubenswrapper[4698]: I1006 11:45:39.993633 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:39Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.009280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.024639 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.052088 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.071162 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.077055 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.077106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.077121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.077150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.077170 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.096285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z 
is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.121203 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.152398 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.170485 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.180183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.180248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.180263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.180289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.180303 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.189648 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.283625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.283683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.283696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.283720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.283736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.328385 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.328447 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.328447 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:40 crc kubenswrapper[4698]: E1006 11:45:40.328608 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:40 crc kubenswrapper[4698]: E1006 11:45:40.328826 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:40 crc kubenswrapper[4698]: E1006 11:45:40.329066 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.387544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.387615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.387633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.387662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.387684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.491714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.491807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.491827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.491861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.491882 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.595662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.596070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.596088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.596120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.596160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.608558 4698 generic.go:334] "Generic (PLEG): container finished" podID="d89609a5-c527-41c2-a78b-e3dbc6ce8819" containerID="05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241" exitCode=0 Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.608644 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerDied","Data":"05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.633567 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.657065 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.683092 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.698692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.698784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.698802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.698840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.698862 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.706329 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.730285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.768457 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.793899 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.803092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.803145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.803160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc 
kubenswrapper[4698]: I1006 11:45:40.803182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.803197 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.818477 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.853563 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.874135 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.897959 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.905836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:40 crc 
kubenswrapper[4698]: I1006 11:45:40.905885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.905898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.905919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.905936 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:40Z","lastTransitionTime":"2025-10-06T11:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.915444 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.918996 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:40 crc kubenswrapper[4698]: E1006 11:45:40.919305 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:48.919283607 +0000 UTC m=+36.331975780 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.927408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.939589 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:40 crc kubenswrapper[4698]: I1006 11:45:40.957057 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:40Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.008671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.008720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.008738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.008765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.008785 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.020574 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.020668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.020809 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.020844 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.020860 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.020866 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.021341 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021393 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:49.021367334 +0000 UTC m=+36.434059507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021396 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021412 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:49.021405795 +0000 UTC m=+36.434097968 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.021447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021468 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:49.021445756 +0000 UTC m=+36.434137949 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021722 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021771 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021795 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:41 crc kubenswrapper[4698]: E1006 11:45:41.021906 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:45:49.021874217 +0000 UTC m=+36.434566540 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.112180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.112596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.112692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.112782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.112870 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.216122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.216167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.216182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.216203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.216215 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.319746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.319807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.319826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.319860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.319883 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.423843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.423907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.423925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.423952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.423970 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.527499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.527583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.527612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.527649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.527675 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.619290 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" event={"ID":"d89609a5-c527-41c2-a78b-e3dbc6ce8819","Type":"ContainerStarted","Data":"719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.628134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.628662 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.630822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.630880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.630899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.630928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.630949 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.647164 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.712151 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.716553 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.734602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.734675 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.734694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.734728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.734749 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.740885 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.766351 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.785191 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.822750 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.842424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.842994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.843053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.843095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.843122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.848368 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.874644 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.896355 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.916346 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.934087 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.947353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:41 crc 
kubenswrapper[4698]: I1006 11:45:41.947411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.947432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.947460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.947481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:41Z","lastTransitionTime":"2025-10-06T11:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.952988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.969928 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:41 crc kubenswrapper[4698]: I1006 11:45:41.994947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.020281 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.041557 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.051691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.051759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc 
kubenswrapper[4698]: I1006 11:45:42.051777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.052206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.052259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.060824 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.077874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.101570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.122874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.138184 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.153614 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.155379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 
11:45:42.155430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.155446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.155469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.155486 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.186082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.208811 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.230343 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.255570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.258720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc 
kubenswrapper[4698]: I1006 11:45:42.258799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.258812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.258838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.258855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.291607 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.315268 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.328886 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.328892 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:42 crc kubenswrapper[4698]: E1006 11:45:42.329076 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.329134 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:42 crc kubenswrapper[4698]: E1006 11:45:42.329147 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:42 crc kubenswrapper[4698]: E1006 11:45:42.329392 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.332125 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.358223 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.362614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.362663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.362684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.362708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.362730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.466039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.466086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.466100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.466120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.466134 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.569495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.569577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.569603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.569634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.569658 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.632241 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.632331 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.664854 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.672550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.672624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.672645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.672677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.672734 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.683497 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.697982 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.713489 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.730084 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.748505 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.769876 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.775961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.776053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.776073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.776101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.776123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.820190 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.841149 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.855985 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.878431 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.879237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.879307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.879329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.879360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.879381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.890864 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.926682 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62896
10cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.949001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.971900 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.983885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.983930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.983943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.983959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.983971 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:42Z","lastTransitionTime":"2025-10-06T11:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:42 crc kubenswrapper[4698]: I1006 11:45:42.990556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:42Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.088205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.088304 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.088339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.088376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.088405 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.191460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.191532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.191551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.191581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.191600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.295719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.295799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.295825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.295861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.295887 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.356311 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.379821 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.400775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc 
kubenswrapper[4698]: I1006 11:45:43.400849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.400864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.400883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.400916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.405616 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.428809 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.453330 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.490866 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.503483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.503810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.503933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.504119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.504240 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.513467 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.534342 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.561799 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.586874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.599246 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.607542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.607603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.607613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.607640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.607651 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.617204 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.631258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.644722 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.659449 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.711144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 
11:45:43.711207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.711226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.711252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.711270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.814794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.814855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.814867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.814890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.814908 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.917976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.918080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.918100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.918130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:43 crc kubenswrapper[4698]: I1006 11:45:43.918153 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:43Z","lastTransitionTime":"2025-10-06T11:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.020978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.021081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.021093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.021111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.021121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.124212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.124281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.124299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.124327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.124347 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.227721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.227796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.227823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.227856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.227881 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.328474 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.328507 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:44 crc kubenswrapper[4698]: E1006 11:45:44.328670 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:44 crc kubenswrapper[4698]: E1006 11:45:44.328791 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.328990 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:44 crc kubenswrapper[4698]: E1006 11:45:44.329580 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.331647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.331699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.331716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.331742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.331760 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.435369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.435617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.435804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.436078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.436279 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.540926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.541401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.541545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.542078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.542213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644674 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/0.log" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.644968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.651222 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345" exitCode=1 Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.651369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.652722 4698 scope.go:117] "RemoveContainer" containerID="8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.678655 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd
4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.715817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee
3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.749551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.749612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.749633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.749661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.749679 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.758524 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:44Z\\\",\\\"message\\\":\\\"eflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:45:44.200704 5987 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:44.200720 5987 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:44.200728 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:44.202256 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:45:44.202309 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:45:44.202350 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:44.202361 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:44.202447 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:44.202471 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:44.202486 5987 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:44.202452 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:45:44.202497 5987 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:45:44.202499 5987 factory.go:656] Stopping watch factory\\\\nI1006 11:45:44.202531 5987 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8a
c17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.788108 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.815095 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.838445 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.852890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.852936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.852948 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.852973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.852988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.860143 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.886437 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.908807 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.930209 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.949726 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.956247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.956310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.956330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.956358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.956375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:44Z","lastTransitionTime":"2025-10-06T11:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.966397 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:44 crc kubenswrapper[4698]: I1006 11:45:44.992091 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:44Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.011628 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.044096 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.060058 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.060117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.060131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.060155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.060175 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.163377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.163422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.163434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.163454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.163466 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.266565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.266615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.266625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.266646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.266658 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.369907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.369954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.369963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.369985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.369997 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.472905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.472988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.473044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.473081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.473104 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.575599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.575651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.575660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.575677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.575692 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.665542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/0.log" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.671817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.672504 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.680908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.680967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.680985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.681045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.681066 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.695932 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.714841 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.735001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.752007 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.779619 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.783892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.783928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.783937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.783953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.783964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.809667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:44Z\\\",\\\"message\\\":\\\"eflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:45:44.200704 5987 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:44.200720 5987 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI1006 11:45:44.200728 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:44.202256 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:45:44.202309 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:45:44.202350 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:44.202361 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:44.202447 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:44.202471 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:44.202486 5987 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:44.202452 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:45:44.202497 5987 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:45:44.202499 5987 factory.go:656] Stopping watch factory\\\\nI1006 11:45:44.202531 5987 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.830759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.845865 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.860853 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.876444 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.886907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.886981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.886998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.887046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.887061 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.891434 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.916232 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62896
10cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.934935 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.950085 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.964557 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:45Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.989926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 
11:45:45.989997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.990057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.990096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:45 crc kubenswrapper[4698]: I1006 11:45:45.990125 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:45Z","lastTransitionTime":"2025-10-06T11:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.093248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.093297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.093309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.093333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.093346 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.197204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.197353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.197378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.197406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.197430 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.301581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.301659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.301682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.301716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.301741 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.328238 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.328285 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.328239 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:46 crc kubenswrapper[4698]: E1006 11:45:46.328441 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:46 crc kubenswrapper[4698]: E1006 11:45:46.328594 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:46 crc kubenswrapper[4698]: E1006 11:45:46.328754 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.403825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.403885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.403902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.403927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.403946 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.508177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.508252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.508272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.508301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.508320 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.612630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.612694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.612711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.612749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.612773 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.679673 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/1.log" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.680762 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/0.log" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.685672 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100" exitCode=1 Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.685751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.685869 4698 scope.go:117] "RemoveContainer" containerID="8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.687248 4698 scope.go:117] "RemoveContainer" containerID="60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100" Oct 06 11:45:46 crc kubenswrapper[4698]: E1006 11:45:46.689574 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.711461 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.719141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.719236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.719260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.719295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.719325 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.735367 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.760121 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee
3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.795114 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea03ec00f26461a6a68be9a5023af33421c5c2cf6023a0b7f28a107c6545345\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:44Z\\\",\\\"message\\\":\\\"eflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 11:45:44.200704 5987 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:44.200720 5987 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI1006 11:45:44.200728 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:44.202256 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 11:45:44.202309 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 11:45:44.202350 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:44.202361 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:44.202447 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:44.202471 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:44.202486 5987 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:44.202452 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:45:44.202497 5987 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:45:44.202499 5987 factory.go:656] Stopping watch factory\\\\nI1006 11:45:44.202531 5987 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 
11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.823954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.824040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.824060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.824089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.824109 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.848817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.882832 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.910984 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.927602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.927659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.927669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.927689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.927700 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:46Z","lastTransitionTime":"2025-10-06T11:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.932070 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.946829 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.971005 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:46 crc kubenswrapper[4698]: I1006 11:45:46.986894 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:46Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.004792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.024110 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.031276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 
11:45:47.031336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.031354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.031380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.031398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.045191 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.065626 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.135527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc 
kubenswrapper[4698]: I1006 11:45:47.136113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.136210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.136317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.136417 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.240702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.240773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.240791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.240823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.240846 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.343739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.343788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.343799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.343821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.343834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.447088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.447150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.447167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.447195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.447214 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.550983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.551095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.551114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.551148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.551169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.654839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.654896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.654909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.654935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.654952 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.692203 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/1.log" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.698388 4698 scope.go:117] "RemoveContainer" containerID="60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100" Oct 06 11:45:47 crc kubenswrapper[4698]: E1006 11:45:47.698716 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.720270 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.741874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.757714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc 
kubenswrapper[4698]: I1006 11:45:47.757791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.757812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.757843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.757863 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.763971 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.782064 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.811338 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.840831 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.860982 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.861979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.862076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.862099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.862132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.862157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.879144 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.894622 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.915458 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.927780 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.956858 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.964957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.965050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.965070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.965099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.965119 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:47Z","lastTransitionTime":"2025-10-06T11:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.975879 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:47 crc kubenswrapper[4698]: I1006 11:45:47.994081 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:47Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.012125 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc"] Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.013222 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.013900 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.016545 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.017949 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.035703 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.057551 4698 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.068585 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.068659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.068681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.068715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.068738 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.081921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.099795 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.111657 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdmm\" (UniqueName: \"kubernetes.io/projected/11609fb5-c3f2-4613-bee1-57ad7ff82cee-kube-api-access-kwdmm\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.111755 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.111796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.112045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.124671 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\
\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.157989 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 
for removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.171897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.171962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.171986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.172058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.172090 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.182331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.199964 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.213740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.213799 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.213858 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.213969 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdmm\" (UniqueName: \"kubernetes.io/projected/11609fb5-c3f2-4613-bee1-57ad7ff82cee-kube-api-access-kwdmm\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.215093 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.215265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.224047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11609fb5-c3f2-4613-bee1-57ad7ff82cee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.225644 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.243717 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdmm\" (UniqueName: \"kubernetes.io/projected/11609fb5-c3f2-4613-bee1-57ad7ff82cee-kube-api-access-kwdmm\") pod \"ovnkube-control-plane-749d76644c-xxgwc\" (UID: \"11609fb5-c3f2-4613-bee1-57ad7ff82cee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.251435 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.273319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.276006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.276144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.276174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.276216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.276244 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.292834 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.312233 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.328116 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.328151 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.328196 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:48 crc kubenswrapper[4698]: E1006 11:45:48.328334 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:48 crc kubenswrapper[4698]: E1006 11:45:48.328501 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:48 crc kubenswrapper[4698]: E1006 11:45:48.328683 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.334768 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.352165 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: W1006 11:45:48.358172 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11609fb5_c3f2_4613_bee1_57ad7ff82cee.slice/crio-b36aa47f13aff05182e99854d4fafd28f24e2312feaceac94a14090af29d2480 WatchSource:0}: Error finding container 
b36aa47f13aff05182e99854d4fafd28f24e2312feaceac94a14090af29d2480: Status 404 returned error can't find the container with id b36aa47f13aff05182e99854d4fafd28f24e2312feaceac94a14090af29d2480 Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.376481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.381681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.381718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.381731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.381754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.381769 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.398462 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:48Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.485862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.485931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.485990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.486062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.486086 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.589684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.589753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.589772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.589804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.589961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.694125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.694184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.694198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.694218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.694233 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.702522 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" event={"ID":"11609fb5-c3f2-4613-bee1-57ad7ff82cee","Type":"ContainerStarted","Data":"69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.702602 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" event={"ID":"11609fb5-c3f2-4613-bee1-57ad7ff82cee","Type":"ContainerStarted","Data":"b36aa47f13aff05182e99854d4fafd28f24e2312feaceac94a14090af29d2480"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.797068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.797128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.797152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.797186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.797205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.900141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.900183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.900196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.900218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.900236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:48Z","lastTransitionTime":"2025-10-06T11:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:48 crc kubenswrapper[4698]: I1006 11:45:48.921957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:45:48 crc kubenswrapper[4698]: E1006 11:45:48.922292 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:46:04.922268191 +0000 UTC m=+52.334960374 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.002763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.003130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.003138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.003155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.003165 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.023336 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.023427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.023484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.023532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023662 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023550 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023694 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023722 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023760 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023765 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:05.023740293 +0000 UTC m=+52.436432496 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023780 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023835 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:05.023783064 +0000 UTC m=+52.436475237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023838 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023853 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:05.023846686 +0000 UTC m=+52.436538859 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023869 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.023964 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:05.023934068 +0000 UTC m=+52.436626281 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.107349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.107409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.107424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.107448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.107461 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.158062 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-v8wrg"] Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.158872 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.159000 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.174457 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-
06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.194451 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.210487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.210527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.210537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.210557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.210568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.213510 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.225684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhsx6\" (UniqueName: \"kubernetes.io/projected/13806999-a8a3-4c95-b41e-6def8c208f4b-kube-api-access-lhsx6\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.225796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " 
pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.230961 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.258093 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.283560 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.298512 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.313229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.313290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.313310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.313338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.313358 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.322087 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.326322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhsx6\" (UniqueName: \"kubernetes.io/projected/13806999-a8a3-4c95-b41e-6def8c208f4b-kube-api-access-lhsx6\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.326400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.326538 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.326603 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:45:49.826582071 +0000 UTC m=+37.239274254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.342942 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.350657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhsx6\" (UniqueName: \"kubernetes.io/projected/13806999-a8a3-4c95-b41e-6def8c208f4b-kube-api-access-lhsx6\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.365460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.382983 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.397218 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.416165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.416227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.416243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.416271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.416287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.422724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.443198 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.460801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.477905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.494325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.520178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.520256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.520278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.520311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.520330 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.624295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.624367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.624382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.624406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.624423 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.710669 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" event={"ID":"11609fb5-c3f2-4613-bee1-57ad7ff82cee","Type":"ContainerStarted","Data":"3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.727244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.727322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.727344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.727375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.727396 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.733171 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.754136 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.772431 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc 
kubenswrapper[4698]: I1006 11:45:49.794178 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.812139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.830617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.830700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.830731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.830769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.830793 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.833425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.833715 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.833855 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:50.83381935 +0000 UTC m=+38.246511653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.838155 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.870823 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.892737 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.909741 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.934243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.934322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.934354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.934394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.934420 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.935383 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.956993 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.970800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.970869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.970889 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.970915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.970935 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.980339 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: E1006 11:45:49.993050 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.998174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.998235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.998258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.998289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:49 crc kubenswrapper[4698]: I1006 11:45:49.998312 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:49Z","lastTransitionTime":"2025-10-06T11:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.001850 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:49Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.016655 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.025943 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.030124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.030196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.030212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.030233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.030248 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.051505 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.051624 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.056838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.056912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.056931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.056960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.056982 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.069081 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.078834 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.084053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.084126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.084144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.084169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.084186 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.086196 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.104385 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.104619 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.107684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.107759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.107772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.107798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.107815 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.156433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.190425 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.205904 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.211601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.211644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.211656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.211678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.211691 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.226570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.243656 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.265140 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.285916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.308675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.314901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.314959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.314973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.314996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.315034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.326539 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.328747 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.328812 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.328865 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.328924 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.328955 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.329221 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.329288 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.329360 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.349723 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.367055 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.386175 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.407555 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.418345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 
11:45:50.418429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.418450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.418485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.418511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.425800 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.458950 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.478184 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.500039 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.522554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.522649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.522668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.523109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.523362 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.532239 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:
45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:50Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.628286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.628547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.628565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.628596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.628616 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.731879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.731957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.731977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.732003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.732060 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.835129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.835210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.835235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.835268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.835287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.846620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.846890 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:50 crc kubenswrapper[4698]: E1006 11:45:50.847047 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:52.846980444 +0000 UTC m=+40.259672647 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.939143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.939231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.939256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.939291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:50 crc kubenswrapper[4698]: I1006 11:45:50.939317 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:50Z","lastTransitionTime":"2025-10-06T11:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.043285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.043372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.043397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.043429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.043454 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.147256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.147328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.147352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.147383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.147404 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.251707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.251801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.251822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.251854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.251874 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.355542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.355609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.355630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.355656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.355677 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.459587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.459656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.459673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.459701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.459723 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.562833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.562915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.562934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.562964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.562985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.667044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.667119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.667141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.667177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.667206 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.770645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.770735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.770756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.770788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.770813 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.875304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.875412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.875437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.875483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.875512 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.980060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.980156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.980177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.980210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:51 crc kubenswrapper[4698]: I1006 11:45:51.980263 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:51Z","lastTransitionTime":"2025-10-06T11:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.083445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.083521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.083540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.083570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.083595 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.187437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.187519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.187538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.187570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.187592 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.290912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.290972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.290989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.291064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.291083 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.327930 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.328062 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.328177 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.328271 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.328441 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.328642 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.328879 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.329150 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.394893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.394975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.394999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.395071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.395094 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.499413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.499494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.499514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.499543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.499562 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.603276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.603387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.603444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.603473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.603525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.707868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.707998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.708065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.708096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.708147 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.810882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.810969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.810991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.811062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.811091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.872862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.873192 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:52 crc kubenswrapper[4698]: E1006 11:45:52.873318 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:45:56.873288314 +0000 UTC m=+44.285980527 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.914544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.914626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.914645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.914677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:52 crc kubenswrapper[4698]: I1006 11:45:52.914697 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:52Z","lastTransitionTime":"2025-10-06T11:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.018507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.018591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.018608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.018641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.018659 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.122982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.123087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.123107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.123138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.123164 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.227205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.227270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.227288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.227316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.227338 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.332121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.332200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.332218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.332248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.332268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.351791 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.370380 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.395266 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.432556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.435876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.435935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.435948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.435969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.435984 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.450350 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc 
kubenswrapper[4698]: I1006 11:45:53.473838 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da
428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 
11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.494951 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.511668 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.533086 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.539169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.539246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.539282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.539310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.539324 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.551852 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.615837 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62896
10cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.643163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.643240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.643255 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.643281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.643303 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.649415 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.666064 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.678960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.692314 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.710247 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.730606 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:45:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.746203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc 
kubenswrapper[4698]: I1006 11:45:53.746275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.746290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.746311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.746325 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.849503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.849572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.849592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.849620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.849640 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.952910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.953056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.953077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.953104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:53 crc kubenswrapper[4698]: I1006 11:45:53.953160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:53Z","lastTransitionTime":"2025-10-06T11:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.057491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.057548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.057567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.057594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.057615 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.161607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.161723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.161762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.161805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.161836 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.265340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.265434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.265453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.265491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.265511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.328957 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.329179 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.329255 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.329597 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:54 crc kubenswrapper[4698]: E1006 11:45:54.329557 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:54 crc kubenswrapper[4698]: E1006 11:45:54.329795 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:54 crc kubenswrapper[4698]: E1006 11:45:54.330062 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:54 crc kubenswrapper[4698]: E1006 11:45:54.330264 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.370296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.370369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.370390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.370420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.370439 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.473690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.473756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.473798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.473827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.473849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.578644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.578719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.578734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.578757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.578772 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.681565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.681642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.681668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.681702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.681721 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.786207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.786273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.786294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.786324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.786346 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.890400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.890695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.890726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.890767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.890795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.994992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.995086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.995107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.995132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:54 crc kubenswrapper[4698]: I1006 11:45:54.995150 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:54Z","lastTransitionTime":"2025-10-06T11:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.098757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.098858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.098878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.098907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.098938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.202327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.202394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.202413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.202439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.202458 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.306902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.306969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.306988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.307049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.307073 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.410385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.410448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.410467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.410499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.410522 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.514488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.514554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.514573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.514638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.514659 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.618402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.618476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.618494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.618525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.618546 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.722564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.722632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.722653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.722680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.722701 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.825845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.825916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.825937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.825964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.825985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.929665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.929737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.929755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.929783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:55 crc kubenswrapper[4698]: I1006 11:45:55.929803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:55Z","lastTransitionTime":"2025-10-06T11:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.033871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.040984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.041371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.041416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.041436 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.145186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.145536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.145781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.145949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.146120 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.250058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.250139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.250152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.250173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.250190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.328451 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.328529 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.328549 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.328661 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.328662 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.328781 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.329065 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.329214 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.354060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.354102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.354123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.354148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.354166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.457010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.457116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.457135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.457164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.457184 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.560611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.560659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.560670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.560688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.560700 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.663554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.663628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.663648 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.663678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.663698 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.767598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.767668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.767687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.767714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.767737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.870658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.870746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.870774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.870810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.870832 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.928071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.928446 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:56 crc kubenswrapper[4698]: E1006 11:45:56.928669 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:46:04.928637011 +0000 UTC m=+52.341329214 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.974627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.974693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.974714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.974744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:56 crc kubenswrapper[4698]: I1006 11:45:56.974767 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:56Z","lastTransitionTime":"2025-10-06T11:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.078793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.078871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.078957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.078991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.079149 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.182922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.182994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.183057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.183086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.183108 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.286446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.286494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.286513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.286542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.286562 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.390327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.390392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.390410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.390439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.390459 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.493057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.493153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.493180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.493220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.493245 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.595938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.596080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.596104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.596145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.596163 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.700771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.700845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.700862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.700891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.700909 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.804415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.804515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.804540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.804575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.804600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.907876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.907930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.907944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.907969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:57 crc kubenswrapper[4698]: I1006 11:45:57.907987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:57Z","lastTransitionTime":"2025-10-06T11:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.011710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.011814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.011832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.011866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.011881 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.116400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.116461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.116481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.116506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.116523 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.220503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.220600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.220627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.220661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.220683 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.324118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.324183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.324202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.324233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.324256 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.328504 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.328530 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.328530 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:45:58 crc kubenswrapper[4698]: E1006 11:45:58.328693 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.328736 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:45:58 crc kubenswrapper[4698]: E1006 11:45:58.328879 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:45:58 crc kubenswrapper[4698]: E1006 11:45:58.329116 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:45:58 crc kubenswrapper[4698]: E1006 11:45:58.329214 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.427387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.427490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.427509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.427581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.427603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.531568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.531692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.531716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.531750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.531779 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.636585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.636655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.636673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.636702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.636722 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.741415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.742094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.742300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.742517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.742731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.846817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.847276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.847827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.848288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.848464 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.953242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.953307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.953325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.953353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:58 crc kubenswrapper[4698]: I1006 11:45:58.953372 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:58Z","lastTransitionTime":"2025-10-06T11:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.056837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.056894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.056960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.056994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.057050 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.160526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.160594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.160608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.160631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.160646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.263818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.263893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.263912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.263941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.263959 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.367680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.367737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.367756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.367781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.367802 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.470934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.471056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.471077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.471113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.471144 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.574707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.574791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.574812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.574842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.574863 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.677632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.677697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.677718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.677749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.677770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.780461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.780522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.780542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.780600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.780620 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.883986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.884065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.884083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.884106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.884125 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.987913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.987968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.987985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.988051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:45:59 crc kubenswrapper[4698]: I1006 11:45:59.988074 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:45:59Z","lastTransitionTime":"2025-10-06T11:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.091923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.092052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.092078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.092115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.092143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.195184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.195640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.195723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.195842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.195936 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.299129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.299538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.299668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.299804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.300003 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.328639 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.328742 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.328742 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.328758 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.328889 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.329084 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.329187 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.329255 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.330758 4698 scope.go:117] "RemoveContainer" containerID="60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.403476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.403963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.403982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.404042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.404063 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.419739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.419836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.419862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.419901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.419928 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.442978 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.448443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.448727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.448857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.448991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.449157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.479741 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.486321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.486385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.486404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.486437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.486457 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.509927 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.515680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.515822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.515848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.515919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.515948 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.538527 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.543738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.543935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.544110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.544289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.544415 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.574614 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: E1006 11:46:00.574832 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.578117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.578176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.578198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.578228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.578249 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.682550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.682613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.682630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.682659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.682680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.763415 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/1.log" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.769427 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.770404 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.785886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.785981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.786008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.786087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.786117 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.798259 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.820980 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.844002 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.866661 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.888805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.888856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.888869 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.888891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.888906 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.908985 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959d
d426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.926580 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.944747 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.966991 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0
f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:
45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991752 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:00Z","lastTransitionTime":"2025-10-06T11:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:00 crc kubenswrapper[4698]: I1006 11:46:00.991893 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 
11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:00Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.011054 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc 
kubenswrapper[4698]: I1006 11:46:01.023166 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.035057 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.046074 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.057690 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.070403 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.083559 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.095516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.095568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.095580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.095605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.095618 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.101408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.199107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.199538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.199648 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.199775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.199955 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.303998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.304542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.304696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.304848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.304987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.408951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.409032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.409047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.409070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.409084 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.511553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.511623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.511641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.511673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.511695 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.614946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.615103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.615134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.615164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.615185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.718812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.718874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.718890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.718918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.718937 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.777408 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/2.log" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.778362 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/1.log" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.782085 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" exitCode=1 Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.782135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.782195 4698 scope.go:117] "RemoveContainer" containerID="60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.783210 4698 scope.go:117] "RemoveContainer" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" Oct 06 11:46:01 crc kubenswrapper[4698]: E1006 11:46:01.783515 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.819004 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60975c7179c42013892c4718d8c3f4b22cc3c724fd2572f54fdea8538cad0100\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:45:45Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 11:45:45.750915 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 11:45:45.750984 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1006 11:45:45.751104 6107 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI1006 11:45:45.751099 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 11:45:45.751122 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:45:45.751143 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:45:45.751174 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:45:45.751217 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:45:45.751233 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:45:45.751300 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:45:45.751325 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:45:45.751334 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1006 11:45:45.751356 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:45:45.751390 6107 factory.go:656] Stopping watch factory\\\\nI1006 11:45:45.751419 6107 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:45:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.821856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.821929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.821947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.821981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.822001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.839807 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc 
kubenswrapper[4698]: I1006 11:46:01.861596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.922403 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.925439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.925512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.925531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.925564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.925593 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:01Z","lastTransitionTime":"2025-10-06T11:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.950195 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.972274 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:01 crc kubenswrapper[4698]: I1006 11:46:01.994567 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:01Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.010108 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030330 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.030871 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.049387 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.071808 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.093954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.115669 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.133740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.133840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.133862 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.133893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.133912 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.150042 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959d
d426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.173916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.197620 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.220595 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.237663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc 
kubenswrapper[4698]: I1006 11:46:02.237726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.237748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.237777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.237797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.328878 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.328992 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.329062 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.329075 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:02 crc kubenswrapper[4698]: E1006 11:46:02.329231 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:02 crc kubenswrapper[4698]: E1006 11:46:02.329382 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:02 crc kubenswrapper[4698]: E1006 11:46:02.329574 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:02 crc kubenswrapper[4698]: E1006 11:46:02.329766 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.341777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.341829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.341845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.341869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.341892 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.445267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.445327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.445342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.445366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.445383 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.548711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.548776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.548795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.548821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.548841 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.652934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.652986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.652996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.653035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.653050 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.756307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.756372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.756392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.756419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.756439 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.788649 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/2.log" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.794858 4698 scope.go:117] "RemoveContainer" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" Oct 06 11:46:02 crc kubenswrapper[4698]: E1006 11:46:02.795221 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.818254 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.841111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860516 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.860880 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.887493 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.922666 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.940689 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.962696 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.964243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.964286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.964304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 
11:46:02.964333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.964353 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:02Z","lastTransitionTime":"2025-10-06T11:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:02 crc kubenswrapper[4698]: I1006 11:46:02.986405 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:02Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.008151 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.030360 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.053683 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.068597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.068705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.068732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.068765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.068788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.072786 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.095815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.116732 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.136301 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.157753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.172303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.172388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.172413 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.172864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.173125 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.195717 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959d
d426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.277803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.277877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.277901 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.277931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.277967 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.354972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.376641 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.381125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.381241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.381262 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.381292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.381313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.400648 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.433327 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.451285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.484885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.484939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.484958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.484986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.485006 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.486685 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.509927 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.532111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.558446 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.577675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.588245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.588324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.588345 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.588376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.588398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.599800 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is 
after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.624957 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 
2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.648150 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea6
2a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.666173 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.691312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.691398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.691422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.691457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.691513 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.696831 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.728294 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.746621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:03Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.794165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.794247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.794267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.794301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.794320 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.898944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.899080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.899098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.899131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:03 crc kubenswrapper[4698]: I1006 11:46:03.899150 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:03Z","lastTransitionTime":"2025-10-06T11:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.003331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.003404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.003422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.003453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.003475 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.107522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.107757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.107911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.108096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.108243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.211515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.211595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.211615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.211643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.211662 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.315088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.315164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.315184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.315221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.315242 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.327905 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.328211 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.328283 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:04 crc kubenswrapper[4698]: E1006 11:46:04.328675 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.328286 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:04 crc kubenswrapper[4698]: E1006 11:46:04.328777 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:04 crc kubenswrapper[4698]: E1006 11:46:04.328964 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:04 crc kubenswrapper[4698]: E1006 11:46:04.329104 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.420117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.420382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.420580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.420764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.420964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.524796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.524857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.524874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.524903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.524922 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.628958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.629057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.629085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.629120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.629143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.732069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.732126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.732143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.732171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.732189 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.835453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.835518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.835537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.835564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.835588 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.924184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:46:04 crc kubenswrapper[4698]: E1006 11:46:04.924524 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:46:36.924463436 +0000 UTC m=+84.337155649 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.938994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.939070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.939090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.939111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:04 crc kubenswrapper[4698]: I1006 11:46:04.939131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:04Z","lastTransitionTime":"2025-10-06T11:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.025265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.025324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.025376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.025443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.025493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025594 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025689 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025706 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025607 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025723 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025878 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025913 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025938 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025751 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:46:21.025715042 +0000 UTC m=+68.438407245 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.025774 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.026141 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:37.026116562 +0000 UTC m=+84.438808765 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.026166 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:37.026154353 +0000 UTC m=+84.438846566 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.026193 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:37.026181394 +0000 UTC m=+84.438873597 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:05 crc kubenswrapper[4698]: E1006 11:46:05.026224 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:46:37.026208074 +0000 UTC m=+84.438900277 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.042252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.042328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.042349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.042379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.042401 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.146200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.146316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.146340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.146380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.146406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.250799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.250912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.250939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.250987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.251055 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.354695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.354762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.354781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.354810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.354831 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.458626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.458684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.458695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.458716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.458731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.563059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.563143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.563163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.563192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.563212 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.667335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.667400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.667417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.667444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.667463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.770397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.770459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.770477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.770505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.770523 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.873761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.873827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.873844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.873875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.873899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.977304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.977370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.977390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.977415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:05 crc kubenswrapper[4698]: I1006 11:46:05.977432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:05Z","lastTransitionTime":"2025-10-06T11:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.081503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.081612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.081631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.081660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.081680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.185435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.185563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.185584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.185631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.185652 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.289544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.289624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.289648 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.289682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.289705 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.328911 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.328965 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.328936 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.328936 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:06 crc kubenswrapper[4698]: E1006 11:46:06.329192 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:06 crc kubenswrapper[4698]: E1006 11:46:06.329300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:06 crc kubenswrapper[4698]: E1006 11:46:06.329533 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:06 crc kubenswrapper[4698]: E1006 11:46:06.329767 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.393971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.394098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.394121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.394154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.394176 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.497848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.497958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.497977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.498010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.498075 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.601905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.601993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.602080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.602112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.602133 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.705778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.705848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.705858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.705880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.705893 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.808832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.808907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.808918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.808934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.808945 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.912447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.912514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.912529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.912551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:06 crc kubenswrapper[4698]: I1006 11:46:06.912591 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:06Z","lastTransitionTime":"2025-10-06T11:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.015567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.015650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.015659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.015686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.015698 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.119176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.119268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.119304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.119340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.119366 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.222339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.222439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.222458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.222521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.222547 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.332959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.333104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.333126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.333149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.333171 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.437198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.437273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.437291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.437320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.437339 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.540750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.540826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.540846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.540876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.540935 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.645287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.645361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.645380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.645408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.645429 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.744296 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.750464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.750548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.750570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.750607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.750641 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.764180 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.766977 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 
crc kubenswrapper[4698]: I1006 11:46:07.804653 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.829659 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.850386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.856274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.856354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.856377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.856408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.856428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.873258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.900187 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.927868 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.950078 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.959704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.959810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.959835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.959871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.959894 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:07Z","lastTransitionTime":"2025-10-06T11:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.970508 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:07 crc kubenswrapper[4698]: I1006 11:46:07.995299 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71
9e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:07Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.031251 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.054577 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.065131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.065234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.065260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.065296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.065322 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.079782 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.103158 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 
11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.122207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.142134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.166345 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:08Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.169460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.169542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.169564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.169588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.169604 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.272967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.273059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.273076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.273103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.273133 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.328312 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.328385 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.328320 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.328438 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:08 crc kubenswrapper[4698]: E1006 11:46:08.328619 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:08 crc kubenswrapper[4698]: E1006 11:46:08.328690 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:08 crc kubenswrapper[4698]: E1006 11:46:08.328764 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:08 crc kubenswrapper[4698]: E1006 11:46:08.328827 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.377074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.377152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.377177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.377209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.377237 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.480794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.480851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.480906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.480930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.480944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.583955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.584032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.584050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.584071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.584085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.687732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.687807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.687826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.687857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.687879 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.791461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.791549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.791563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.791588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.791603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.895394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.895439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.895449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.895466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.895476 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.998752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.998810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.998826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.998852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:08 crc kubenswrapper[4698]: I1006 11:46:08.998873 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:08Z","lastTransitionTime":"2025-10-06T11:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.102564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.102630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.102644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.102669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.102684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.205418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.205481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.205492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.205516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.205529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.308627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.308682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.308692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.308714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.308726 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.412614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.412684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.412704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.412779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.412799 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.516083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.516143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.516157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.516178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.516191 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.619749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.619847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.619873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.619913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.619939 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.723906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.724002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.724171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.724211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.724276 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.827817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.827949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.827968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.827994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.828065 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.931334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.931511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.931542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.931575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:09 crc kubenswrapper[4698]: I1006 11:46:09.931603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:09Z","lastTransitionTime":"2025-10-06T11:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.035631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.035707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.035731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.035770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.035808 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.140061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.140137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.140153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.140197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.140219 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.243785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.243867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.243888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.243920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.243942 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.328876 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.328871 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.329106 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.329064 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.328897 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.329311 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.329450 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.329567 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.347327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.347398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.347419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.347456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.347479 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.451277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.451348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.451367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.451396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.451416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.554596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.554689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.554707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.554737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.554757 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.658560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.658620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.658640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.658671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.658691 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.761868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.761961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.761985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.762024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.762085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.798694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.798765 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.798782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.798810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.798830 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.820529 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:10Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.826618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.826673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.826694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.826720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.826739 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.845772 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:10Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.851122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.851221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.851241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.851269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.851324 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.875890 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:10Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.882382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.882439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.882461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.882487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.882509 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.903270 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to previous attempt elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:10Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.908890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.908969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.908989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.909026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.909072 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.929401 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:10Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:10 crc kubenswrapper[4698]: E1006 11:46:10.929628 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.932353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.932438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.932456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.932486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:10 crc kubenswrapper[4698]: I1006 11:46:10.932505 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:10Z","lastTransitionTime":"2025-10-06T11:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.035779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.035850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.035867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.035897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.035916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.139479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.139539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.139556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.139585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.139608 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.243757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.243845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.243866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.243896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.243922 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.347288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.347364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.347384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.347412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.347432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.451663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.451733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.451749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.451774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.451791 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.555480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.555554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.555572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.555600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.555621 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.658115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.658208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.658256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.658281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.658311 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.761699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.761755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.761769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.761794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.761809 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.865819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.865884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.865901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.865932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.865952 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.971115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.971384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.971406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.971434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:11 crc kubenswrapper[4698]: I1006 11:46:11.971453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:11Z","lastTransitionTime":"2025-10-06T11:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.075384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.075458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.075484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.075513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.075539 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.179140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.179219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.179244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.179277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.179304 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.286402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.286492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.286512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.286542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.286598 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.328326 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.328402 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.328476 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.328518 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:12 crc kubenswrapper[4698]: E1006 11:46:12.328674 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:12 crc kubenswrapper[4698]: E1006 11:46:12.328862 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:12 crc kubenswrapper[4698]: E1006 11:46:12.329076 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:12 crc kubenswrapper[4698]: E1006 11:46:12.329196 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.390673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.390754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.390774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.390807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.390828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.495246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.495323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.495343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.495374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.495395 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.599705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.599783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.599810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.599842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.599861 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.703478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.703550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.703571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.703596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.703616 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.806614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.806701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.806720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.806748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.806765 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.910811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.910874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.910893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.910919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:12 crc kubenswrapper[4698]: I1006 11:46:12.910940 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:12Z","lastTransitionTime":"2025-10-06T11:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.014374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.014447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.014468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.014496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.014514 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.118622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.118709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.118733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.118766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.118794 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.227036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.227131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.227161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.227190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.227213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.330696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.331445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.331471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.331500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.331523 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.347409 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.365787 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.391284 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.438545 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.466488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.466588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.466604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.466630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.466646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.492006 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.518665 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.530913 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.552420 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e6
6acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.569512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.569586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.569605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.569641 4698 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.569661 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.570741 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.588529 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.614622 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.627973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.661435 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.674051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.674143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.674173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.674210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.674236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.680713 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac6
1261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.702228 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.720653 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.736807 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.757334 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:13Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.777090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.777154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.777177 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.777210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.777228 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.880653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.880729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.880748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.880777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.880797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.984496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.984599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.984621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.984654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:13 crc kubenswrapper[4698]: I1006 11:46:13.984675 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:13Z","lastTransitionTime":"2025-10-06T11:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.087555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.087656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.087691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.087727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.087755 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.191315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.191402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.191419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.191448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.191471 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.294571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.294622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.294632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.294651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.294663 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.328882 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.328957 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.329002 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.329002 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:14 crc kubenswrapper[4698]: E1006 11:46:14.329159 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:14 crc kubenswrapper[4698]: E1006 11:46:14.329409 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:14 crc kubenswrapper[4698]: E1006 11:46:14.329514 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:14 crc kubenswrapper[4698]: E1006 11:46:14.329636 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.330284 4698 scope.go:117] "RemoveContainer" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" Oct 06 11:46:14 crc kubenswrapper[4698]: E1006 11:46:14.330463 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.398749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.398815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.398835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.398860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.398881 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.502254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.502357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.502379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.502408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.502428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.605701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.605770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.605794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.605825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.605844 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.709306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.709382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.709401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.709429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.709450 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.812977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.813082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.813108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.813150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.813175 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.917064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.917140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.917159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.917190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:14 crc kubenswrapper[4698]: I1006 11:46:14.917213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:14Z","lastTransitionTime":"2025-10-06T11:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.020550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.020622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.020642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.020783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.020821 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.124785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.124864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.124894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.124928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.124955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.236724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.236801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.236822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.236852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.236874 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.339802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.339884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.339910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.339941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.339964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.442853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.443308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.443531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.443707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.443849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.547898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.548338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.548495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.548658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.548804 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.652019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.652118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.652141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.652170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.652191 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.756163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.756239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.756254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.756288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.756306 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.859439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.859501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.859520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.859546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.859562 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.964154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.964244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.964264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.964300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:15 crc kubenswrapper[4698]: I1006 11:46:15.964322 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:15Z","lastTransitionTime":"2025-10-06T11:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.068616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.068701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.068725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.068759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.068781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.172160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.172547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.172613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.172708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.172778 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.276970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.277094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.277122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.277153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.277174 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.328660 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.328714 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.329383 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:16 crc kubenswrapper[4698]: E1006 11:46:16.329587 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:16 crc kubenswrapper[4698]: E1006 11:46:16.329900 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.329935 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:16 crc kubenswrapper[4698]: E1006 11:46:16.330113 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:16 crc kubenswrapper[4698]: E1006 11:46:16.330395 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.380571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.380651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.380671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.380700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.380720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.484472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.484532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.484549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.484575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.484598 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.588331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.588427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.588455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.588497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.588518 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.693308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.693377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.693402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.693428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.693453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.798608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.799006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.799238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.799413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.799573 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.904682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.904776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.904796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.904830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:16 crc kubenswrapper[4698]: I1006 11:46:16.904853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:16Z","lastTransitionTime":"2025-10-06T11:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.008677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.008754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.008773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.008805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.008825 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.112828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.112905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.112931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.112965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.112988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.217089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.217600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.217744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.217891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.218096 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.321445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.321523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.321541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.321576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.321603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.425575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.425697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.425725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.425781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.425857 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.529532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.529605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.529622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.529653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.529674 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.633879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.633958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.633978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.634042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.634072 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.738430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.738508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.738527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.738555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.738575 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.842587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.842663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.842683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.842713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.842733 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.946256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.946332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.946353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.946380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:17 crc kubenswrapper[4698]: I1006 11:46:17.946398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:17Z","lastTransitionTime":"2025-10-06T11:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.050434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.050492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.050504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.050521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.050536 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.154400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.154474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.154496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.154549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.154569 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.258016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.258132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.258156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.258186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.258207 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.328302 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:18 crc kubenswrapper[4698]: E1006 11:46:18.328552 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.328912 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:18 crc kubenswrapper[4698]: E1006 11:46:18.329019 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.329258 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:18 crc kubenswrapper[4698]: E1006 11:46:18.329351 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.329576 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:18 crc kubenswrapper[4698]: E1006 11:46:18.329690 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.361678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.361738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.361750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.361771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.361785 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.465872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.465933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.465951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.465986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.466067 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.569820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.569876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.569891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.569916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.569936 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.673471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.673556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.673577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.673607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.673626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.777189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.777269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.777289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.777319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.777341 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.880662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.880726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.880744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.880771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.880792 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.985041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.985102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.985118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.985142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:18 crc kubenswrapper[4698]: I1006 11:46:18.985157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:18Z","lastTransitionTime":"2025-10-06T11:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.089433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.089509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.089527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.089554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.089572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.193716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.193780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.193798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.193825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.193849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.297320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.297385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.297408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.297434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.297453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.400141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.400197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.400211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.400232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.400246 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.503043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.503132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.503151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.503182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.503206 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.606772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.606854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.606878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.606911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.606936 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.710629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.710777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.710798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.710825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.710877 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.814869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.814933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.814953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.814983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.815003 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.918691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.918759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.918779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.918808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:19 crc kubenswrapper[4698]: I1006 11:46:19.918830 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:19Z","lastTransitionTime":"2025-10-06T11:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.022946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.022995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.023010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.023059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.023077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.127408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.127468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.127486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.127512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.127530 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.232457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.232586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.232611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.232647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.232680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.328297 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.328311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.328506 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:20 crc kubenswrapper[4698]: E1006 11:46:20.328525 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.328333 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:20 crc kubenswrapper[4698]: E1006 11:46:20.328788 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:20 crc kubenswrapper[4698]: E1006 11:46:20.328948 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:20 crc kubenswrapper[4698]: E1006 11:46:20.329218 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.337097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.337154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.337177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.337209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.337233 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.440041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.440084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.440097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.440116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.440130 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.543043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.543101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.543118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.543142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.543160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.646233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.646309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.646329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.646357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.646376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.749952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.749997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.750008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.750044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.750059 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.852776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.852830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.852841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.852862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.852877 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.955565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.955615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.955633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.955659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:20 crc kubenswrapper[4698]: I1006 11:46:20.955681 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:20Z","lastTransitionTime":"2025-10-06T11:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.048993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.049298 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.049455 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:46:53.04939476 +0000 UTC m=+100.462086973 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.059283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.059357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.059375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.059404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.059424 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.131436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.131485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.131504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.131527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.131544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.146583 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:21Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.151304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.151342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.151353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.151374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.151389 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.167193 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:21Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.171241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.171329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.171354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.171375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.171394 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.184497 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:21Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.188931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.188971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.188989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.189010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.189055 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.206386 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:21Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.210615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.210699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.210717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.210750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.210767 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.229589 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:21Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:21 crc kubenswrapper[4698]: E1006 11:46:21.229753 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.231679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.231727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.231743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.231767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.231788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.335243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.335298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.335314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.335331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.335344 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.438224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.438271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.438283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.438307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.438322 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.540958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.541061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.541083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.541111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.541130 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.644859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.644929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.644950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.644983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.645003 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.748693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.748764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.748785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.748816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.748835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.852689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.852758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.852777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.852799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.852816 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.956741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.956826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.956852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.956890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:21 crc kubenswrapper[4698]: I1006 11:46:21.956919 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:21Z","lastTransitionTime":"2025-10-06T11:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.060076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.060136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.060152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.060178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.060196 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.163796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.163885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.163905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.163940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.163963 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.267061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.267095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.267109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.267129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.267141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.328277 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:22 crc kubenswrapper[4698]: E1006 11:46:22.328461 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.328549 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:22 crc kubenswrapper[4698]: E1006 11:46:22.328607 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.328663 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:22 crc kubenswrapper[4698]: E1006 11:46:22.328738 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.328875 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:22 crc kubenswrapper[4698]: E1006 11:46:22.329128 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.370364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.370435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.370453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.370990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.371225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.475218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.475267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.475286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.475313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.475331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.578856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.578918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.578936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.578963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.578980 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.682846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.682901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.682916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.682940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.682955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.786285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.786333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.786351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.786375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.786394 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.876589 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/0.log" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.876664 4698 generic.go:334] "Generic (PLEG): container finished" podID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" containerID="ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696" exitCode=1 Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.876720 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerDied","Data":"ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.877431 4698 scope.go:117] "RemoveContainer" containerID="ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.889284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.889334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.889352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.889377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.889395 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.891055 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.909240 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.928439 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.941818 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.961625 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.992363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.992543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.992632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.992741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.992830 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:22Z","lastTransitionTime":"2025-10-06T11:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:22 crc kubenswrapper[4698]: I1006 11:46:22.998879 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:22Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.009235 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.029728 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e6
6acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.045369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.063386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.077731 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.090002 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.095737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.095788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.095803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.095824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.095837 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.117543 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.133502 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.147727 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.162575 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.173701 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.185711 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.199102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.199177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.199205 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.199238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.199262 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.302448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.302486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.302494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.302523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.302535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.343396 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.355204 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.372456 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.399905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.404919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.404944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.404953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.404968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.404979 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.412526 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc 
kubenswrapper[4698]: I1006 11:46:23.423522 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b
twf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.439143 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.450790 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.464189 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.478436 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.493647 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.508292 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.508352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.508363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.508383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.508397 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.517140 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.533069 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.552944 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.573323 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.591667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.608556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.611347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.611406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.611423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.611452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.611465 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.628996 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.715388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.715432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.715443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.715461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.715472 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.818009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.818097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.818114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.818146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.818166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.883070 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/0.log" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.883177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerStarted","Data":"be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.899390 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.911790 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.920753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.920804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.920816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.920843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.920857 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:23Z","lastTransitionTime":"2025-10-06T11:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.927615 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.948551 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.961905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.974513 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:23 crc kubenswrapper[4698]: I1006 11:46:23.989790 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c
843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:23Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.008077 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.024209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.024258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.024269 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.024289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.024301 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.027612 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.041362 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.055293 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.081589 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d854497350
1582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a86324
0c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.097626 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
0-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.112244 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.126833 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.127981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.128068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.128089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.128118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.128140 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.141752 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.158123 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.175483 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:24Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.232201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.232239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.232250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.232269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.232287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.328777 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:24 crc kubenswrapper[4698]: E1006 11:46:24.329010 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.329159 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.329172 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:24 crc kubenswrapper[4698]: E1006 11:46:24.329225 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.329179 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:24 crc kubenswrapper[4698]: E1006 11:46:24.329478 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:24 crc kubenswrapper[4698]: E1006 11:46:24.329697 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.335675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.335729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.335742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.335762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.335775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.439518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.439569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.439579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.439596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.439607 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.543126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.543199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.543216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.543241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.543256 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.647140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.647222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.647253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.647289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.647313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.750418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.750469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.750484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.750504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.750518 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.854716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.854811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.854837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.854865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.854883 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.958301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.958368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.958391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.958421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:24 crc kubenswrapper[4698]: I1006 11:46:24.958441 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:24Z","lastTransitionTime":"2025-10-06T11:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.062143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.062213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.062231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.062259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.062280 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.165828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.165885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.165895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.165919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.165932 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.269230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.269304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.269320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.269349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.269369 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.372390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.372462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.372481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.372510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.372529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.475759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.475821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.475856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.475889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.475909 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.580186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.580272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.580302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.580339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.580360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.683227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.683280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.683296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.683320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.683337 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.786853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.786926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.786939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.786959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.786972 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.890155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.890247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.890274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.890304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.890324 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.994920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.995003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.995049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.995083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:25 crc kubenswrapper[4698]: I1006 11:46:25.995104 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:25Z","lastTransitionTime":"2025-10-06T11:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.098627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.098696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.098713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.099222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.099282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.202745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.202858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.202883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.202924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.202949 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.306887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.306975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.306999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.307061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.307083 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.328856 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.328917 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.328947 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:26 crc kubenswrapper[4698]: E1006 11:46:26.329129 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.329195 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:26 crc kubenswrapper[4698]: E1006 11:46:26.329295 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:26 crc kubenswrapper[4698]: E1006 11:46:26.329401 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:26 crc kubenswrapper[4698]: E1006 11:46:26.329735 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.330801 4698 scope.go:117] "RemoveContainer" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.410064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.410111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.410124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.410146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.410161 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.514410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.514487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.514508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.514539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.514557 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.617875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.617925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.617936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.617954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.617966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.721191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.721249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.721270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.721298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.721318 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.824106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.824161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.824171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.824195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.824208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.903044 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/2.log" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.906369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.906756 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.923135 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f67
5f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.926789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.926824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.926834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.926853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.926867 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:26Z","lastTransitionTime":"2025-10-06T11:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.937723 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.964320 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.977464 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.987924 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:26 crc kubenswrapper[4698]: I1006 11:46:26.999014 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:26Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.012481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a
250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.025635 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.030062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.030114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.030123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.030140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.030152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.036339 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc 
kubenswrapper[4698]: I1006 11:46:27.048331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.057855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.072859 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.094409 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.114598 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.128594 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.132865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.132901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.132917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.132938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.132952 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.142753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.162393 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.181972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.236608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.236691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.236712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.236741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.236762 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.340000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.340091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.340109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.340142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.340160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.443668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.443729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.443747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.443775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.443795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.547101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.547633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.547797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.548011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.548206 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.651779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.651846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.651858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.651882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.651896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.756324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.756408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.756438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.756476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.756506 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.860626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.860729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.860755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.860788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.860814 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.912987 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/3.log" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.913998 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/2.log" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.918254 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" exitCode=1 Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.918307 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.918350 4698 scope.go:117] "RemoveContainer" containerID="856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.919976 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:46:27 crc kubenswrapper[4698]: E1006 11:46:27.920378 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.936770 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.960229 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt
\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.964758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.964802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.964814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.964834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.964846 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:27Z","lastTransitionTime":"2025-10-06T11:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:27 crc kubenswrapper[4698]: I1006 11:46:27.993461 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856700c98df1bc22a3ee1e2505ae71861ca0106e77260962c58290feaf2bda29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:01Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1006 11:46:01.388688 6296 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1006 11:46:01.388698 6296 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 11:46:01.388721 6296 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 11:46:01.388721 
6296 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1006 11:46:01.388786 6296 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 11:46:01.388802 6296 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 11:46:01.388807 6296 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1006 11:46:01.388817 6296 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 11:46:01.388887 6296 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 11:46:01.388890 6296 factory.go:656] Stopping watch factory\\\\nI1006 11:46:01.388886 6296 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 11:46:01.389187 6296 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1006 11:46:01.389464 6296 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1006 11:46:01.389605 6296 ovnkube.go:599] Stopped ovnkube\\\\nI1006 11:46:01.389714 6296 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1006 11:46:01.389911 6296 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:27Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sz4ws after 0 failed attempt(s)\\\\nI1006 11:46:27.407911 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sz4ws\\\\nI1006 11:46:27.407909 6606 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407927 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407944 6606 
ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc in node crc\\\\nI1006 11:46:27.407956 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc after 0 failed attempt(s)\\\\nI1006 11:46:27.407964 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nF1006 11:46:27.407961 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:27Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.012081 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc 
kubenswrapper[4698]: I1006 11:46:28.034137 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.057748 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c
843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.067657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.067893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.068118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.068286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.068418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.080288 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.101680 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.122518 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.141538 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.160256 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.171963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.172061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.172090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.172126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.172151 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.178204 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.191370 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.204399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.219832 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.251066 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.272246 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.280894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.281055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.281112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.281148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.281180 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.307150 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.328786 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.328873 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.328998 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:28 crc kubenswrapper[4698]: E1006 11:46:28.328988 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:28 crc kubenswrapper[4698]: E1006 11:46:28.329208 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.328804 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:28 crc kubenswrapper[4698]: E1006 11:46:28.329327 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:28 crc kubenswrapper[4698]: E1006 11:46:28.329405 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.384752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.384820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.384840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.384865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.384884 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.487983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.488099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.488120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.488147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.488165 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.590979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.591050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.591066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.591092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.591110 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.695133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.695277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.695302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.695369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.695395 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.801184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.801248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.801266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.801293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.801314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.904871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.904958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.904983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.905069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.905101 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:28Z","lastTransitionTime":"2025-10-06T11:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.927406 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/3.log" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.933675 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:46:28 crc kubenswrapper[4698]: E1006 11:46:28.933989 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.958485 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:28 crc kubenswrapper[4698]: I1006 11:46:28.987980 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:28Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.003147 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.008528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.008587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.008606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.008636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.008657 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.023605 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.039815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.057068 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.081104 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:27Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sz4ws after 0 failed 
attempt(s)\\\\nI1006 11:46:27.407911 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sz4ws\\\\nI1006 11:46:27.407909 6606 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407927 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407944 6606 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc in node crc\\\\nI1006 11:46:27.407956 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc after 0 failed attempt(s)\\\\nI1006 11:46:27.407964 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nF1006 11:46:27.407961 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.101318 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.112307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.112411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.112435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.112462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.112481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.115383 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.138777 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 
11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.160769 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.182573 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.202200 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.216258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.216712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.216829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc 
kubenswrapper[4698]: I1006 11:46:29.216962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.217131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.219522 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4becb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06
T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.255931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187
e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.279467 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.297089 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.313310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:29Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.320586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.320620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.320632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.320652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.320666 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.424081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.424141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.424166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.424197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.424219 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.527734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.528417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.528612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.528775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.528927 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.632751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.632816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.632835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.632861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.632881 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.736056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.736151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.736174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.736206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.736226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.839934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.839997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.840040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.840066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.840088 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.943502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.943567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.944201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.944273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:29 crc kubenswrapper[4698]: I1006 11:46:29.944293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:29Z","lastTransitionTime":"2025-10-06T11:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.048397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.048451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.048469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.048500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.048520 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.152321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.152393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.152418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.152455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.152479 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.256148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.256202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.256220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.256246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.256926 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.328312 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.328415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.328499 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.328336 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:30 crc kubenswrapper[4698]: E1006 11:46:30.328521 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:30 crc kubenswrapper[4698]: E1006 11:46:30.328733 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:30 crc kubenswrapper[4698]: E1006 11:46:30.328903 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:30 crc kubenswrapper[4698]: E1006 11:46:30.329147 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.359727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.359829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.359861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.359895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.359920 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.463143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.463199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.463217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.463240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.463259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.566631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.566688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.566704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.566727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.566746 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.670547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.670657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.670685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.670717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.670744 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.773487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.773546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.773564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.773589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.773639 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.877064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.877124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.877146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.877175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.877198 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.979609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.979689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.979714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.979749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:30 crc kubenswrapper[4698]: I1006 11:46:30.979776 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:30Z","lastTransitionTime":"2025-10-06T11:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.083309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.083374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.083391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.083416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.083433 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.186842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.186898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.186917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.186944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.186968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.272800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.272868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.272885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.272910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.272931 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.295999 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:31Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.301258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.301303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.301322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.301345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.301365 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.323386 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:31Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.328834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.328920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.328968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.328997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.329062 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.353394 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:31Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.359137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.359192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.359210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.359235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.359259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.381411 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:31Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.386675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.386779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.386857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.386938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.387070 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.419952 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:31Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:31 crc kubenswrapper[4698]: E1006 11:46:31.420259 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.423462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.423517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.423541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.423575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.423601 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.527579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.527687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.527714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.527750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.527775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.631242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.631309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.631327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.631358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.631379 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.734090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.734151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.734173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.734199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.734218 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.838117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.838197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.838235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.838268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.838290 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.941368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.941426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.941443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.941471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:31 crc kubenswrapper[4698]: I1006 11:46:31.941490 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:31Z","lastTransitionTime":"2025-10-06T11:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.045312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.045375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.045395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.045423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.045447 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.149167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.149238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.149256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.149286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.149308 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.253064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.253150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.253170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.253197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.253214 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.328091 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.328200 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:32 crc kubenswrapper[4698]: E1006 11:46:32.328288 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.328308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.328336 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:32 crc kubenswrapper[4698]: E1006 11:46:32.328518 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:32 crc kubenswrapper[4698]: E1006 11:46:32.328686 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:32 crc kubenswrapper[4698]: E1006 11:46:32.328817 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.356625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.357167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.357340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.357482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.357641 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.462427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.462497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.462519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.462547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.462569 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.566229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.566307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.566325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.566359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.566384 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.669444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.669486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.669496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.669512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.669526 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.772916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.772985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.773004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.773054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.773075 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.877462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.877526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.877540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.877560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.877575 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.980512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.980573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.980590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.980616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:32 crc kubenswrapper[4698]: I1006 11:46:32.980643 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:32Z","lastTransitionTime":"2025-10-06T11:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.085239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.085325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.085348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.085382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.085406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.188666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.188741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.188763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.188799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.188818 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.292326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.292398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.292419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.292448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.292467 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.354281 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.378461 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.397302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.397375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.397395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.397427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.397449 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.401822 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.420567 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.447426 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.475904 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:27Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sz4ws after 0 failed 
attempt(s)\\\\nI1006 11:46:27.407911 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sz4ws\\\\nI1006 11:46:27.407909 6606 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407927 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407944 6606 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc in node crc\\\\nI1006 11:46:27.407956 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc after 0 failed attempt(s)\\\\nI1006 11:46:27.407964 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nF1006 11:46:27.407961 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.494918 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.500735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.500804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.500832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.500867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.500894 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.520460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.542368 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.565802 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.593995 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.604849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.605125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.605303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.605463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.605623 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.612480 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.646401 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62896
10cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.667487 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.688315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.707526 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.709325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.709479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.709573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.709676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.709777 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.724389 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.740621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:33Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.812611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.812651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.812664 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.812681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.812690 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.918412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.918489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.918507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.918533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:33 crc kubenswrapper[4698]: I1006 11:46:33.918556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:33Z","lastTransitionTime":"2025-10-06T11:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.021216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.021290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.021307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.021335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.021353 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.125824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.125901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.125925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.125962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.125990 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.229768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.229842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.229864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.229894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.229911 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.327973 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.328151 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.328209 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.328509 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:34 crc kubenswrapper[4698]: E1006 11:46:34.328489 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:34 crc kubenswrapper[4698]: E1006 11:46:34.328910 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:34 crc kubenswrapper[4698]: E1006 11:46:34.329153 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:34 crc kubenswrapper[4698]: E1006 11:46:34.329239 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.333597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.333654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.333674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.333702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.333775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.345122 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.438152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.438278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.438299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.438326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.438348 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.541977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.542074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.542094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.542122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.542143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.645650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.645721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.645742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.645776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.645796 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.751184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.751292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.751344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.751377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.751445 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.856008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.856189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.856257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.856289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.856307 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.959102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.959220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.959282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.959308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:34 crc kubenswrapper[4698]: I1006 11:46:34.959365 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:34Z","lastTransitionTime":"2025-10-06T11:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.063006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.063100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.063110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.063127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.063139 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.166217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.166276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.166286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.166311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.166323 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.270085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.270149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.270167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.270193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.270213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.374065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.374153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.374177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.374216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.374247 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.477585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.477658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.477683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.477717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.477736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.581203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.581304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.581326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.581358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.581377 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.684471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.684544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.684562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.684589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.684608 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.787661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.787723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.787742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.787769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.787788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.891878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.891960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.892003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.892089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.892109 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.995182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.995256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.995274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.995302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:35 crc kubenswrapper[4698]: I1006 11:46:35.995324 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:35Z","lastTransitionTime":"2025-10-06T11:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.098952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.099003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.099035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.099055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.099069 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.202102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.202186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.202203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.202233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.202260 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.305437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.305484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.305502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.305529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.305553 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.328767 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.328784 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:36 crc kubenswrapper[4698]: E1006 11:46:36.328977 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.329041 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:36 crc kubenswrapper[4698]: E1006 11:46:36.329228 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.329253 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:36 crc kubenswrapper[4698]: E1006 11:46:36.329413 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:36 crc kubenswrapper[4698]: E1006 11:46:36.329531 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.409040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.409109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.409138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.409172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.409197 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.512855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.512923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.512942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.512971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.512991 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.615763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.615832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.615851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.615879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.615898 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.718933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.719007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.719072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.719110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.719137 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.821857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.821996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.822067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.822096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.822157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.925440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.925506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.925524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.925555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.925575 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:36Z","lastTransitionTime":"2025-10-06T11:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:36 crc kubenswrapper[4698]: I1006 11:46:36.953379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:46:36 crc kubenswrapper[4698]: E1006 11:46:36.953566 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:40.953531777 +0000 UTC m=+148.366223980 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.028723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.028774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.028785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.028804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.028815 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.054967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.055255 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.055295 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.055335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055508 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055530 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055542 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055561 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055709 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055609 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.055590764 +0000 UTC m=+148.468282937 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055769 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.055760 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.056235 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.055778209 +0000 UTC m=+148.468470382 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.057295 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.057246707 +0000 UTC m=+148.469938910 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.057486 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:37 crc kubenswrapper[4698]: E1006 11:46:37.057580 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.057543756 +0000 UTC m=+148.470235969 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.132968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.133081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.133103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.133134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.133154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.236636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.236704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.236721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.236749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.236797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.339848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.339970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.339988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.340046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.340066 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.444418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.444507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.444532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.444566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.444586 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.548762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.548833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.548853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.548879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.548897 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.651674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.651725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.651739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.651757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.651770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.756181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.756262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.756290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.756390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.756478 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.861729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.861800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.861818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.861845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.861863 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.966965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.967301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.967326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.967360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:37 crc kubenswrapper[4698]: I1006 11:46:37.967391 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:37Z","lastTransitionTime":"2025-10-06T11:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.070724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.071388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.071412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.071447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.071467 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.176716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.177316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.177437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.177574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.177699 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.281377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.281734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.281868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.282121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.282289 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.328280 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.328484 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.328620 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:38 crc kubenswrapper[4698]: E1006 11:46:38.328608 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.328714 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:38 crc kubenswrapper[4698]: E1006 11:46:38.329004 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:38 crc kubenswrapper[4698]: E1006 11:46:38.329079 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:38 crc kubenswrapper[4698]: E1006 11:46:38.329144 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.385713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.385790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.385810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.385844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.385867 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.489631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.489685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.489705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.489726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.489746 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.593259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.593332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.593351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.593379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.593400 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.697792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.697852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.697872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.697900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.697918 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.801379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.801444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.801454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.801473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.801484 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.904812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.904878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.904897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.904930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:38 crc kubenswrapper[4698]: I1006 11:46:38.904950 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:38Z","lastTransitionTime":"2025-10-06T11:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.008589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.008673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.008692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.008722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.008742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.113214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.113289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.113307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.113340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.113359 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.216829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.216903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.216922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.216953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.216978 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.320539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.320628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.320654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.320701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.320733 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.424678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.424764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.424783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.424818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.424842 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.529392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.529486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.529507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.529541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.529564 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.633462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.633531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.633553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.633582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.633603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.737221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.737300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.737324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.737357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.737378 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.842773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.842872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.842892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.843052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.843087 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.947492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.947580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.947598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.947626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:39 crc kubenswrapper[4698]: I1006 11:46:39.947647 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:39Z","lastTransitionTime":"2025-10-06T11:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.051310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.051375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.051392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.051417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.051437 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.155659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.155759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.155785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.155823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.155853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.260973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.261110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.261325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.261363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.261389 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.328308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.328375 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.328391 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.328443 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:40 crc kubenswrapper[4698]: E1006 11:46:40.328562 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:40 crc kubenswrapper[4698]: E1006 11:46:40.328712 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:40 crc kubenswrapper[4698]: E1006 11:46:40.328860 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:40 crc kubenswrapper[4698]: E1006 11:46:40.328990 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.365098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.365165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.365186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.365214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.365234 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.468928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.469118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.469146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.469219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.469244 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.573206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.573287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.573310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.573348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.573373 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.677405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.677466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.677528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.677559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.677584 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.781631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.781776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.781796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.781858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.781882 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.885877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.886523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.886658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.886807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.886933 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.990101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.990586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.990744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.990875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:40 crc kubenswrapper[4698]: I1006 11:46:40.991058 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:40Z","lastTransitionTime":"2025-10-06T11:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.095522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.095876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.096073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.096285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.096463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.199999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.200497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.200674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.200909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.201173 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.303687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.303766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.303784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.303805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.303822 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.330653 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.331052 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.407121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.407205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.407226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.407248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.407268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.510074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.510125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.510142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.510168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.510187 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.613219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.613347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.613370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.613403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.613427 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.634938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.634992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.635009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.635059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.635077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.656283 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.661933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.662069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.662092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.662114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.662178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.683220 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.688616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.688678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.688704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.688741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.688767 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.709119 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.715002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.715107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.715127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.715161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.715180 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.734839 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.740096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.740159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.740182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.740235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.740254 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.760238 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:41Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:41 crc kubenswrapper[4698]: E1006 11:46:41.760547 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.763006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.763158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.763177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.763219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.763240 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.866899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.867346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.867493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.867632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.867753 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.971964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.972265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.972467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.972623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:41 crc kubenswrapper[4698]: I1006 11:46:41.972761 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:41Z","lastTransitionTime":"2025-10-06T11:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.076487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.076570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.076589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.076624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.076644 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.180293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.180376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.180395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.180424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.180445 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.284101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.284186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.284214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.284244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.284265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.328261 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.328284 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.328388 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.328462 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:42 crc kubenswrapper[4698]: E1006 11:46:42.328667 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:42 crc kubenswrapper[4698]: E1006 11:46:42.328886 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:42 crc kubenswrapper[4698]: E1006 11:46:42.329158 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:42 crc kubenswrapper[4698]: E1006 11:46:42.329297 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.387542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.387584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.387600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.387620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.387637 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.491199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.491268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.491286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.491314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.491335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.594644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.594730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.594751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.594783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.594803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.698339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.698428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.698457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.698497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.698520 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.802922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.803000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.803046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.803083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.803109 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.906176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.906248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.906270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.906299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:42 crc kubenswrapper[4698]: I1006 11:46:42.906318 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:42Z","lastTransitionTime":"2025-10-06T11:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.009275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.009346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.009364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.009392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.009412 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.112744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.112819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.112837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.112869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.112899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.215442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.215539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.215563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.215603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.215632 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.319289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.319383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.319411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.319478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.319512 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.370623 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.389522 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.410787 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.422952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.423090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.423118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc 
kubenswrapper[4698]: I1006 11:46:43.423192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.423215 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.431735 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.452559 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b
95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.469584 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.480793 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.502570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.522834 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dcc73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.527424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.527494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.527516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.527560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.527583 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.541008 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.567479 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13
bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089
f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.600080 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:27Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sz4ws after 0 failed attempt(s)\\\\nI1006 11:46:27.407911 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sz4ws\\\\nI1006 11:46:27.407909 6606 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407927 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407944 6606 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc in node crc\\\\nI1006 11:46:27.407956 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc after 0 failed attempt(s)\\\\nI1006 11:46:27.407964 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nF1006 11:46:27.407961 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.615424 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.629973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd7da2a-5593-4582-bd6f-696e84df57ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3c92b853936feb3512efe9fd7d07aacc3495f7d64d4d9f6a73a5317b3613440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.632531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.632875 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.638289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.638342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.638371 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.656430 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c
843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.674503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.693606 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.713429 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.729917 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:43Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.742388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.742522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.742613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.742712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.742795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.845248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.845560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.845706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.845834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.845951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.949233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.949295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.949313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.949340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:43 crc kubenswrapper[4698]: I1006 11:46:43.949360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:43Z","lastTransitionTime":"2025-10-06T11:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.052588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.052883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.052994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.053159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.053287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.157732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.158121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.158235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.158337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.158442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.262803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.262887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.262908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.262943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.262964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.328646 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.328689 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.328688 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.328668 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:44 crc kubenswrapper[4698]: E1006 11:46:44.328883 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:44 crc kubenswrapper[4698]: E1006 11:46:44.329037 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:44 crc kubenswrapper[4698]: E1006 11:46:44.329150 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:44 crc kubenswrapper[4698]: E1006 11:46:44.329236 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.366491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.366549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.366566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.366590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.366614 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.469633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.469957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.470157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.470321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.470452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.574610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.574685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.574702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.574731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.574751 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.678495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.678561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.678580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.678611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.678636 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.782559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.782653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.782684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.782719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.782742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.887134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.887208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.887232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.887263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.887281 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.991412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.991485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.991503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.991538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:44 crc kubenswrapper[4698]: I1006 11:46:44.991560 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:44Z","lastTransitionTime":"2025-10-06T11:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.095688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.096281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.096478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.096662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.096858 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.200428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.200849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.201093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.201284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.201431 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.304440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.304733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.304905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.305095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.305290 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.408910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.409179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.409406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.409619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.409810 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.513957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.514507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.514674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.514826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.514996 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.619238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.619700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.619899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.620196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.620485 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.724703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.724777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.724795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.724829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.724849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.829373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.829446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.829468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.829498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.829517 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.933391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.933474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.933501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.933535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:45 crc kubenswrapper[4698]: I1006 11:46:45.933559 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:45Z","lastTransitionTime":"2025-10-06T11:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.037373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.037439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.037456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.037480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.037501 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.141154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.141220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.141240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.141274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.141295 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.244189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.244243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.244256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.244277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.244293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.328816 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.328862 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:46 crc kubenswrapper[4698]: E1006 11:46:46.328963 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.328874 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:46 crc kubenswrapper[4698]: E1006 11:46:46.329109 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.329147 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:46 crc kubenswrapper[4698]: E1006 11:46:46.329339 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:46 crc kubenswrapper[4698]: E1006 11:46:46.329434 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.347371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.347400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.347410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.347426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.347438 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.450969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.451031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.451041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.451056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.451066 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.554117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.554195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.554218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.554251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.554272 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.659550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.659633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.659657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.659691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.659712 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.765570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.765646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.765665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.765693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.765717 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.869574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.869659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.869679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.869710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.869733 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.973827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.973921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.973945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.973981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:46 crc kubenswrapper[4698]: I1006 11:46:46.974054 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:46Z","lastTransitionTime":"2025-10-06T11:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.078077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.078164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.078191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.078228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.078259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.182747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.182814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.182832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.182857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.182883 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.287216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.287298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.287316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.287386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.287406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.391793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.391865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.391883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.391912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.391934 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.495806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.495889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.495916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.495954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.495986 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.599293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.599341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.599353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.599377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.599391 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.702672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.702758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.702778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.702809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.702835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.806686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.806743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.806761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.806896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.806955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.909855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.909963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.909987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.910049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:47 crc kubenswrapper[4698]: I1006 11:46:47.910073 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:47Z","lastTransitionTime":"2025-10-06T11:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.013089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.013146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.013165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.013192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.013210 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.116783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.116836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.116854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.116881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.116899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.220584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.220673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.220692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.220722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.220748 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.324118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.324257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.324278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.324342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.324364 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.328284 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.328392 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.328403 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:48 crc kubenswrapper[4698]: E1006 11:46:48.328822 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.328422 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:48 crc kubenswrapper[4698]: E1006 11:46:48.328993 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:48 crc kubenswrapper[4698]: E1006 11:46:48.329115 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:48 crc kubenswrapper[4698]: E1006 11:46:48.329371 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.427675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.427986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.428190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.428343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.428485 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.531321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.531429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.531450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.531477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.531501 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.634688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.634750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.634766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.634792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.634810 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.738090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.738162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.738183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.738211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.738235 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.841147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.841223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.841238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.841264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.841282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.945810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.945878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.945890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.945918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:48 crc kubenswrapper[4698]: I1006 11:46:48.945931 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:48Z","lastTransitionTime":"2025-10-06T11:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.048963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.049034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.049047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.049068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.049082 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.152532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.152615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.152640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.152670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.152727 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.256979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.257074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.257092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.257115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.257134 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.360184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.360243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.360283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.360307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.360321 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.463552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.463630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.463651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.463679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.463701 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.567143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.567207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.567225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.567249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.567266 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.670127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.670173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.670186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.670213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.670227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.773606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.773699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.773720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.773807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.773835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.877469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.877549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.877569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.877599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.877620 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.981222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.981282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.981298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.981326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:49 crc kubenswrapper[4698]: I1006 11:46:49.981344 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:49Z","lastTransitionTime":"2025-10-06T11:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.084704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.084789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.084816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.084853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.084880 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.188726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.188812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.188837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.188875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.188900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.292950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.293073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.293100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.293133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.293154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.327889 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.327947 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.328064 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.327890 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:50 crc kubenswrapper[4698]: E1006 11:46:50.328141 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:50 crc kubenswrapper[4698]: E1006 11:46:50.328347 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:50 crc kubenswrapper[4698]: E1006 11:46:50.328931 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:50 crc kubenswrapper[4698]: E1006 11:46:50.329001 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.396686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.396771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.396797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.396831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.396858 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.499986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.500112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.500131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.500160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.500178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.603782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.603843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.603859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.603884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.603900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.707768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.707851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.707873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.707905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.707930 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.811791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.811918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.811937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.811964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.811982 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.915290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.915360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.915373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.915396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:50 crc kubenswrapper[4698]: I1006 11:46:50.915415 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:50Z","lastTransitionTime":"2025-10-06T11:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.019166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.019249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.019267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.019308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.019325 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.123179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.123255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.123274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.123305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.123325 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.227572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.227630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.227647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.227672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.227718 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.330700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.330774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.330797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.330825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.330845 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.435046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.435095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.435112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.435207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.435230 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.540359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.540486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.540506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.540536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.540556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.644086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.644143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.644160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.644189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.644207 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.747757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.747830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.747852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.747883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.747907 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.851138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.851208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.851226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.851252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.851269 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.954835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.954949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.954970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.954996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:51 crc kubenswrapper[4698]: I1006 11:46:51.955040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:51Z","lastTransitionTime":"2025-10-06T11:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.057688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.057759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.057782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.057812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.057835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.075246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.075326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.075344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.075366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.075384 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.097686 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.108791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.108875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.108897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.108931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.108954 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.142840 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.156512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.156609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.156633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.156670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.156695 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.190092 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.194458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.194520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.194536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.194564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.194580 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.211842 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.216663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.216727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.216746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.216772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.216788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.232280 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0861d471-78ee-41c9-b36d-d10e0af16681\\\",\\\"systemUUID\\\":\\\"fa4de2a4-9ac6-4340-beb9-b5a9d6c5030f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:52Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.232446 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.234816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.234852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.234864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.234884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.234899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.328931 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.328969 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.329144 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.329395 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.329424 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.329599 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.329685 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:52 crc kubenswrapper[4698]: E1006 11:46:52.329792 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.338277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.338349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.338370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.338396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.338417 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.441987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.442096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.442123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.442165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.442198 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.545621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.545717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.545745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.545786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.545812 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.649337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.649409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.649434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.649464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.649486 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.753537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.753638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.753666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.753706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.753731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.858009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.858103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.858125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.858159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.858182 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.962078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.962158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.962183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.962216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:52 crc kubenswrapper[4698]: I1006 11:46:52.962236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:52Z","lastTransitionTime":"2025-10-06T11:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.055286 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:53 crc kubenswrapper[4698]: E1006 11:46:53.055623 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:53 crc kubenswrapper[4698]: E1006 11:46:53.055768 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs podName:13806999-a8a3-4c95-b41e-6def8c208f4b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:57.055731843 +0000 UTC m=+164.468424046 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs") pod "network-metrics-daemon-v8wrg" (UID: "13806999-a8a3-4c95-b41e-6def8c208f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.066191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.066317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.066341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.066373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.066398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.169949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.170054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.170075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.170104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.170122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.272826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.272885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.272897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.272913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.272926 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.351902 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.375770 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97851ca62002bd0d1ad62d1318b8dd2142b251ca8f3f959c1cc41c5e9a91cddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.378441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.378520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.378538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.378566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.378587 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.393832 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tqfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3afedf6c-a96a-4c64-b3b7-411361950f7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d301b11e4204c94548384531c4314762f813f8ad65aa5b05d199774f45c6079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btwf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tqfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.411364 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd7da2a-5593-4582-bd6f-696e84df57ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3c92b853936feb3512efe9fd7d07aacc3495f7d64d4d9f6a73a5317b3613440\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d272e8233fc2a8cfb09be447adcaa1dcef994d8d25f094b839f02934a6b01989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.441409 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"124c8f85-4b75-4391-b76a-1eb5fa18d469\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6135236ee2fee1b2b92710205d22c8cda26216d9bab940b00423a0a0c97fcbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1bcacd15a0da428175025576271bff71e966a9193b9da8b0579b23f6532c1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534d10be5aafad024d72544a0cc013d567800552fd6d785fba5d57c58f5e9e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5761bb08077b2e6921f7b3c771e094ed602517aed17e81a6bd9eb66e53520d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a445aebec03f6899c4f12e45c2cf70de77a90139234712630fce4a2ad1101e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T11:45:33Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1006 11:45:26.992137 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 11:45:26.996111 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-675921894/tls.crt::/tmp/serving-cert-675921894/tls.key\\\\\\\"\\\\nI1006 11:45:33.233244 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1006 11:45:33.238819 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1006 11:45:33.239074 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1006 11:45:33.239100 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1006 11:45:33.239108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1006 11:45:33.253551 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1006 11:45:33.253579 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1006 11:45:33.253589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1006 
11:45:33.253592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 11:45:33.253596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 11:45:33.253599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 11:45:33.253789 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 11:45:33.257452 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696120b7ea62597ad8c301ecbd33c96251e1b58c64eac8ef69629e0ebe97c573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2052ea6bab0c3e192695cb7218155c843be647490e66acccd13d8be094dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.462009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.481846 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e3a360a26edeb546295e834543b38d31cc36bf10a2241564b0a174f6ddb4c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.482175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.482230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.482257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.482295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.482321 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.500181 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://396a9f675f730fa3f5f3b6215b5b1dee2c74a831827f2491285c261f4ec16679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mj8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.518324 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11609fb5-c3f2-4613-bee1-57ad7ff82cee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69cfb79fed3909927063b27d1eb25c18e39b093e674bf7e54ca40e21aa29746d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9bd3cc7840d41f34ff25cee328f040a6d4b
ecb6221b4cfce5aad9c33177fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwdmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xxgwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.553934 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad4fc934-81fd-4bd6-b583-696deb45eedc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d50c8f9158c50e087ab52bb06e94a7ac902a8326370082f214787523e2d027f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d69ffdf4cd6cf910b323437576cbefc55d1d00627d8544973501582336b3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d205efb6a32fa8c608ba911d3e30cf9945d94153a05714eb4a5b65aaeb23b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526c1728440f81091ede7ae8178c08dc435d7e93c287a6fe5e9eace3d9de9f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d224078562eedeffa006c7817d322998d1361c63b17f7403c2011d701451a016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fefab1703b228f7a1100281a863240c6a187e741d4038fb84c0435e12a533453\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0ccda105346daa0d955bf8938060bbd8e151e7a3ac0b0206b4cbfaca8b3218e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289610cb5cb3a8d52a77a6b7fa7cfec29d2a74d959dd426b00225deba35ad28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.570972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eb27734-83b5-49b3-ab35-3ff7ee5dfcd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f25e709c4590567e614de870dd2404307573fdae64eceb7729ac7388cbc78e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://004e7896b8033f8667422ca008cac61261d59b43cc0997840cf7102432e14c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff433ba70af6cabb3fab8ad94eb500455d96cdf4b4cbc7eb1e122fa99f3c0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85018804cdbe2ab650d84a43c9d7c930693c3c6859bcc5dedd13c2e728f6d99d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.585899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.585971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 
11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.585993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.586057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.586085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.587551 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.607744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa1df6932691af70770910302cd0a3d1a64d08296b32c9364b96419fe9b70b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f652e2d7a11f7a250b184fca6227f562e8838fd85a103bbfe14a72ca20799dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.628327 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4f8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e581ae92-9ea3-40a6-abd4-09eb81bb5be4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:21Z\\\",\\\"message\\\":\\\"2025-10-06T11:45:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce\\\\n2025-10-06T11:45:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_15feb0cd-a154-4ed6-a985-efbf49ded7ce to /host/opt/cni/bin/\\\\n2025-10-06T11:45:36Z [verbose] multus-daemon started\\\\n2025-10-06T11:45:36Z [verbose] 
Readiness Indicator file check\\\\n2025-10-06T11:46:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hw8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4f8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.658507 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c16ee453-14bb-4f57-addd-3fc27cb739de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T11:46:27Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sz4ws after 0 failed 
attempt(s)\\\\nI1006 11:46:27.407911 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sz4ws\\\\nI1006 11:46:27.407909 6606 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407927 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nI1006 11:46:27.407944 6606 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc in node crc\\\\nI1006 11:46:27.407956 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc after 0 failed attempt(s)\\\\nI1006 11:46:27.407964 6606 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc\\\\nF1006 11:46:27.407961 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T11:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205bd4d226fa041013
5ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtv5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sz4ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.676168 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13806999-a8a3-4c95-b41e-6def8c208f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhsx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v8wrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.688934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.689326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.689603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.689767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.689916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.698366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c8c29de-6f7b-40dc-b29d-c5f94f53b24b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8911f6cbd4ccda8622ed15ea8d859c55b8a85bf02d74d4cfda2e97d7ab8a8fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a637047dc
c73a6056be2fea62a1af9671259b2a8435ff55b69236c429a1626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082d71a90655a860384aa640361f60e694b030b62ab829025bee2e672e9cbf38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccfccb92be6980781f791176ba25a5fab9d3bd9cc80cafb2cef37cf18a85112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.714724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x762x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50439b92-052f-4198-bff0-e5d256bf46b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55d8d41618cac2164bd4caaf262fd12e60707a9c332a856f640e48b2395a8aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r458j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x762x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.739159 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d89609a5-c527-41c2-a78b-e3dbc6ce8819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T11:45:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719e090210c5aa8593adc90b8e46efd13bbf27f89573b395be1382860df82cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T11:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52006f25e59c2ae83c23b61c052a9f8d358a64b5081d4b4f0f332956887bf250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://009d5182849f330feab8a9e7673460ef2cf7011fce3d2fd7a4e6b36e89ebb43d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f525f8d7d5df8c1baae2b14f51e00326df7e4cf2bebd3fa577ff37da97e0e18\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349ee3c21a2e4584f76bf09f73c96ad4af6dfd41b548ad8c2a562204584eeb61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e42
3fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e057c9eae8e6d25ab7482c64b8d6e423fc6bdbb52706732c0d8ee09efdf6307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T11:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bd0989c8cd94083830f67c36e0a1b79f88e94c36eea4e6d8406803b8562241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T11:45:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T11:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl67j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T11:45:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxgjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T11:46:53Z is after 2025-08-24T17:21:41Z" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.794337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.794425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.794445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.794503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.794521 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.897459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.897730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.898165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.898401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:53 crc kubenswrapper[4698]: I1006 11:46:53.898600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:53Z","lastTransitionTime":"2025-10-06T11:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.001745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.001851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.001885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.001922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.001943 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.105276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.105628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.105774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.105913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.106099 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.211388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.211714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.211856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.212010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.212209 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.316616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.316866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.317060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.317224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.317512 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.328490 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.328681 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.328622 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.328695 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:54 crc kubenswrapper[4698]: E1006 11:46:54.329127 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:54 crc kubenswrapper[4698]: E1006 11:46:54.329270 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:54 crc kubenswrapper[4698]: E1006 11:46:54.329475 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:54 crc kubenswrapper[4698]: E1006 11:46:54.329632 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.421008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.421184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.421214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.421251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.421279 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.524806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.524914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.524934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.524969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.525001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.627754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.627826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.627845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.627874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.627896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.731445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.731516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.731539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.731566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.731587 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.835521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.836050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.836121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.836199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.836268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.939259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.939619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.939687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.939772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:54 crc kubenswrapper[4698]: I1006 11:46:54.939853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:54Z","lastTransitionTime":"2025-10-06T11:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.044336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.044435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.044460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.044498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.044522 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.148640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.148745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.148777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.148816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.148835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.252914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.252984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.253002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.253065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.253086 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.329656 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:46:55 crc kubenswrapper[4698]: E1006 11:46:55.330072 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.356962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.357352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.357574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.357822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.358007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.460860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.460936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.460959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.460993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.461052 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.564295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.564381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.564394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.564447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.564463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.669748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.669824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.669842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.669882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.669907 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.777987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.778971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.779204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.779427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.779640 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.883317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.884128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.884392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.884633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.884853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.989265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.989316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.989330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.989348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:55 crc kubenswrapper[4698]: I1006 11:46:55.989364 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:55Z","lastTransitionTime":"2025-10-06T11:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.092819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.092896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.092923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.092959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.093000 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.196592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.196666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.196689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.196718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.196739 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.300200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.300273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.300290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.300321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.300348 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.327813 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.327905 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.327967 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:56 crc kubenswrapper[4698]: E1006 11:46:56.328172 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.328223 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:56 crc kubenswrapper[4698]: E1006 11:46:56.328392 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:56 crc kubenswrapper[4698]: E1006 11:46:56.328646 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:56 crc kubenswrapper[4698]: E1006 11:46:56.328768 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.403191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.403342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.403365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.403394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.403418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.507169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.507229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.507246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.507274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.507293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.611114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.611190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.611208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.611236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.611258 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.715104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.715174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.715192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.715219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.715238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.818636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.818688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.818697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.818714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.818724 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.921887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.921928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.921938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.921957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:56 crc kubenswrapper[4698]: I1006 11:46:56.921968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:56Z","lastTransitionTime":"2025-10-06T11:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.025212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.025292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.025321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.025362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.025384 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.128443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.128494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.128507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.128526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.128540 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.232290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.232377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.232401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.232438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.232462 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.335127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.335209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.335232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.335270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.335294 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.439253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.439320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.439340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.439367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.439391 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.542167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.542264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.542282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.542315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.542340 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.645661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.645717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.645737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.645763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.645783 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.750262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.750351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.750376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.750423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.750453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.856327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.856419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.856442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.856474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.856496 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.960296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.960412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.960432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.960461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:57 crc kubenswrapper[4698]: I1006 11:46:57.960481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:57Z","lastTransitionTime":"2025-10-06T11:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.063059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.063132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.063152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.063179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.063198 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.166559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.166621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.166639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.166666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.166685 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.269974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.270100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.270118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.270146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.270168 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.328051 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.328051 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:46:58 crc kubenswrapper[4698]: E1006 11:46:58.328235 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.328362 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.328425 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:46:58 crc kubenswrapper[4698]: E1006 11:46:58.328593 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:46:58 crc kubenswrapper[4698]: E1006 11:46:58.328694 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:46:58 crc kubenswrapper[4698]: E1006 11:46:58.328865 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.374536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.374622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.374642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.374672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.374694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.478038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.478095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.478112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.478136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.478154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.581995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.582085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.582104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.582126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.582143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.685632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.685717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.685741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.685771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.685791 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.789099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.789169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.789188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.789221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.789241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.892661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.892748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.892771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.892800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.892820 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.997382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.997456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.997484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.997521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:58 crc kubenswrapper[4698]: I1006 11:46:58.997554 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:58Z","lastTransitionTime":"2025-10-06T11:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.101507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.101577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.101597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.101621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.101645 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.204623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.204693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.204712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.204743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.204763 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.307949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.308068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.308088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.308118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.308137 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.411182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.411227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.411241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.411259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.411273 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.514221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.514301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.514321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.514349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.514365 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.616927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.617402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.617722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.617910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.618121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.721721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.721799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.721818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.721851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.721872 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.825888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.826324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.826397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.826465 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.826527 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.930118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.930553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.930705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.930845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:46:59 crc kubenswrapper[4698]: I1006 11:46:59.930975 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:46:59Z","lastTransitionTime":"2025-10-06T11:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.034415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.034493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.034519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.034549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.034574 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.137887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.137935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.137946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.137971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.137985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.240621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.240686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.240699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.240730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.240747 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.328274 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.328341 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:00 crc kubenswrapper[4698]: E1006 11:47:00.328449 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.328274 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.328528 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:00 crc kubenswrapper[4698]: E1006 11:47:00.328667 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:00 crc kubenswrapper[4698]: E1006 11:47:00.328731 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:00 crc kubenswrapper[4698]: E1006 11:47:00.328789 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.343849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.343907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.343920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.343956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.343974 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.448062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.448150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.448174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.448209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.448236 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.551332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.551400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.551425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.551463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.551487 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.654611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.654649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.654657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.654673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.654682 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.762585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.762655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.762676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.762704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.762738 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.866483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.866561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.866575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.866600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.866619 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.969493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.969542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.969553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.969570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:00 crc kubenswrapper[4698]: I1006 11:47:00.969582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:00Z","lastTransitionTime":"2025-10-06T11:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.072826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.073498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.073692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.073915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.074171 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.178184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.178562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.178761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.178926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.179277 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.282647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.282766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.282782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.282799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.282821 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.387598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.387685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.387717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.387753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.387778 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.490548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.490625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.490651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.490682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.490706 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.594199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.594276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.594299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.594334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.594357 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.697528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.697582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.697600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.697625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.697646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.801875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.801941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.801959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.801984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.802001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.905704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.905767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.905785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.905814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:01 crc kubenswrapper[4698]: I1006 11:47:01.905836 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:01Z","lastTransitionTime":"2025-10-06T11:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.009317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.009397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.009415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.009444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.009466 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.113510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.113995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.114214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.114356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.114487 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.217960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.218082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.218105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.218553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.218587 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.321500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.322169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.322205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.322227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.322243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.328718 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.328777 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:02 crc kubenswrapper[4698]: E1006 11:47:02.328828 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:02 crc kubenswrapper[4698]: E1006 11:47:02.328893 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.328721 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.329275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:02 crc kubenswrapper[4698]: E1006 11:47:02.329334 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:02 crc kubenswrapper[4698]: E1006 11:47:02.329463 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.425439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.425491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.425501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.425522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.425534 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.528574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.528617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.528630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.528649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.528667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.633465 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.633556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.633582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.633619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.633645 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.635960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.636046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.636066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.636087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.636103 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T11:47:02Z","lastTransitionTime":"2025-10-06T11:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.708607 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv"] Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.709139 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.714321 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.714355 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.714741 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.715119 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.750664 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4f8bs" podStartSLOduration=89.750628663 podStartE2EDuration="1m29.750628663s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.748448095 +0000 UTC m=+110.161140278" watchObservedRunningTime="2025-10-06 11:47:02.750628663 +0000 UTC m=+110.163320866" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782439 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8968166c-a0f9-430b-8247-c7eeb586afd0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782522 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8968166c-a0f9-430b-8247-c7eeb586afd0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8968166c-a0f9-430b-8247-c7eeb586afd0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782413 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-x762x" podStartSLOduration=89.7823843 podStartE2EDuration="1m29.7823843s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.781106406 +0000 UTC m=+110.193798569" watchObservedRunningTime="2025-10-06 11:47:02.7823843 +0000 UTC m=+110.195076723" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.782837 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.782821721 podStartE2EDuration="1m28.782821721s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.767097432 +0000 UTC m=+110.179789595" watchObservedRunningTime="2025-10-06 11:47:02.782821721 +0000 UTC m=+110.195513894" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.807864 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dxgjr" podStartSLOduration=89.807840192 podStartE2EDuration="1m29.807840192s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.806548739 +0000 UTC m=+110.219240942" watchObservedRunningTime="2025-10-06 11:47:02.807840192 +0000 UTC m=+110.220532365" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.883433 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5tqfs" podStartSLOduration=88.88340523 podStartE2EDuration="1m28.88340523s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.866831468 +0000 UTC m=+110.279523641" watchObservedRunningTime="2025-10-06 11:47:02.88340523 +0000 UTC m=+110.296097443" Oct 06 11:47:02 
crc kubenswrapper[4698]: I1006 11:47:02.883782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8968166c-a0f9-430b-8247-c7eeb586afd0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.883838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8968166c-a0f9-430b-8247-c7eeb586afd0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.883880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8968166c-a0f9-430b-8247-c7eeb586afd0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.883947 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.884183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.884210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.885378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8968166c-a0f9-430b-8247-c7eeb586afd0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.885470 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8968166c-a0f9-430b-8247-c7eeb586afd0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.893450 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.893433831 podStartE2EDuration="28.893433831s" podCreationTimestamp="2025-10-06 11:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.892936218 +0000 UTC m=+110.305628431" watchObservedRunningTime="2025-10-06 11:47:02.893433831 
+0000 UTC m=+110.306126034" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.900574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8968166c-a0f9-430b-8247-c7eeb586afd0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.900696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8968166c-a0f9-430b-8247-c7eeb586afd0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jjcrv\" (UID: \"8968166c-a0f9-430b-8247-c7eeb586afd0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.927585 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.927548959 podStartE2EDuration="1m29.927548959s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.911916062 +0000 UTC m=+110.324608235" watchObservedRunningTime="2025-10-06 11:47:02.927548959 +0000 UTC m=+110.340241132" Oct 06 11:47:02 crc kubenswrapper[4698]: I1006 11:47:02.981404 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xxgwc" podStartSLOduration=88.981376601 podStartE2EDuration="1m28.981376601s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:02.979972164 +0000 UTC m=+110.392664367" watchObservedRunningTime="2025-10-06 
11:47:02.981376601 +0000 UTC m=+110.394068774" Oct 06 11:47:03 crc kubenswrapper[4698]: I1006 11:47:03.012481 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.01245193 podStartE2EDuration="1m27.01245193s" podCreationTimestamp="2025-10-06 11:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:03.011893245 +0000 UTC m=+110.424585418" watchObservedRunningTime="2025-10-06 11:47:03.01245193 +0000 UTC m=+110.425144113" Oct 06 11:47:03 crc kubenswrapper[4698]: I1006 11:47:03.027519 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.027495522 podStartE2EDuration="56.027495522s" podCreationTimestamp="2025-10-06 11:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:03.026784483 +0000 UTC m=+110.439476666" watchObservedRunningTime="2025-10-06 11:47:03.027495522 +0000 UTC m=+110.440187705" Oct 06 11:47:03 crc kubenswrapper[4698]: I1006 11:47:03.031144 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" Oct 06 11:47:03 crc kubenswrapper[4698]: I1006 11:47:03.080074 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" event={"ID":"8968166c-a0f9-430b-8247-c7eeb586afd0","Type":"ContainerStarted","Data":"bb63178fdc30ed4074869e9e32b6fe9f9232d4cffadb0f40613f4ebd36f4a3a9"} Oct 06 11:47:03 crc kubenswrapper[4698]: I1006 11:47:03.082772 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podStartSLOduration=90.08275762 podStartE2EDuration="1m30.08275762s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:03.082183985 +0000 UTC m=+110.494876168" watchObservedRunningTime="2025-10-06 11:47:03.08275762 +0000 UTC m=+110.495449793" Oct 06 11:47:04 crc kubenswrapper[4698]: I1006 11:47:04.087486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" event={"ID":"8968166c-a0f9-430b-8247-c7eeb586afd0","Type":"ContainerStarted","Data":"a63629093f5da43277f23047ac374c40990c219e991fcedb77d3930aac154e75"} Oct 06 11:47:04 crc kubenswrapper[4698]: I1006 11:47:04.111040 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jjcrv" podStartSLOduration=90.110990122 podStartE2EDuration="1m30.110990122s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:04.108651731 +0000 UTC m=+111.521343944" watchObservedRunningTime="2025-10-06 11:47:04.110990122 +0000 UTC m=+111.523682325" Oct 06 11:47:04 crc 
kubenswrapper[4698]: I1006 11:47:04.328803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:04 crc kubenswrapper[4698]: I1006 11:47:04.328885 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:04 crc kubenswrapper[4698]: I1006 11:47:04.329006 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:04 crc kubenswrapper[4698]: E1006 11:47:04.329192 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:04 crc kubenswrapper[4698]: I1006 11:47:04.329253 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:04 crc kubenswrapper[4698]: E1006 11:47:04.329533 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:04 crc kubenswrapper[4698]: E1006 11:47:04.329663 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:04 crc kubenswrapper[4698]: E1006 11:47:04.329906 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:06 crc kubenswrapper[4698]: I1006 11:47:06.328398 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:06 crc kubenswrapper[4698]: I1006 11:47:06.328497 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:06 crc kubenswrapper[4698]: I1006 11:47:06.328435 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:06 crc kubenswrapper[4698]: I1006 11:47:06.328428 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:06 crc kubenswrapper[4698]: E1006 11:47:06.328680 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:06 crc kubenswrapper[4698]: E1006 11:47:06.328827 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:06 crc kubenswrapper[4698]: E1006 11:47:06.328928 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:06 crc kubenswrapper[4698]: E1006 11:47:06.329167 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:07 crc kubenswrapper[4698]: I1006 11:47:07.330267 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:47:07 crc kubenswrapper[4698]: E1006 11:47:07.330588 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sz4ws_openshift-ovn-kubernetes(c16ee453-14bb-4f57-addd-3fc27cb739de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" Oct 06 11:47:08 crc kubenswrapper[4698]: I1006 11:47:08.328852 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:08 crc kubenswrapper[4698]: I1006 11:47:08.328962 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:08 crc kubenswrapper[4698]: I1006 11:47:08.329050 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:08 crc kubenswrapper[4698]: E1006 11:47:08.329256 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:08 crc kubenswrapper[4698]: E1006 11:47:08.329378 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:08 crc kubenswrapper[4698]: E1006 11:47:08.329852 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:08 crc kubenswrapper[4698]: I1006 11:47:08.328898 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:08 crc kubenswrapper[4698]: E1006 11:47:08.330087 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.112661 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/1.log" Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.113348 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/0.log" Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.113406 4698 generic.go:334] "Generic (PLEG): container finished" podID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" containerID="be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9" exitCode=1 Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.113458 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerDied","Data":"be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9"} Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.113512 4698 scope.go:117] "RemoveContainer" containerID="ff3faeceed3d25e963e38ba86dcded0595d65c86afc2d64f901b707c92157696" Oct 06 11:47:09 crc kubenswrapper[4698]: I1006 11:47:09.114311 4698 scope.go:117] "RemoveContainer" containerID="be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9" Oct 06 11:47:09 crc kubenswrapper[4698]: E1006 11:47:09.114563 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4f8bs_openshift-multus(e581ae92-9ea3-40a6-abd4-09eb81bb5be4)\"" pod="openshift-multus/multus-4f8bs" podUID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" Oct 06 11:47:10 crc kubenswrapper[4698]: I1006 11:47:10.118707 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/1.log" Oct 06 11:47:10 crc kubenswrapper[4698]: I1006 11:47:10.328954 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:10 crc kubenswrapper[4698]: I1006 11:47:10.329006 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:10 crc kubenswrapper[4698]: I1006 11:47:10.328970 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:10 crc kubenswrapper[4698]: I1006 11:47:10.328956 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:10 crc kubenswrapper[4698]: E1006 11:47:10.329196 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:10 crc kubenswrapper[4698]: E1006 11:47:10.329375 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:10 crc kubenswrapper[4698]: E1006 11:47:10.329541 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:10 crc kubenswrapper[4698]: E1006 11:47:10.329649 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:12 crc kubenswrapper[4698]: I1006 11:47:12.328848 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:12 crc kubenswrapper[4698]: I1006 11:47:12.328937 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:12 crc kubenswrapper[4698]: E1006 11:47:12.328985 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:12 crc kubenswrapper[4698]: I1006 11:47:12.329062 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:12 crc kubenswrapper[4698]: E1006 11:47:12.329180 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:12 crc kubenswrapper[4698]: I1006 11:47:12.328955 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:12 crc kubenswrapper[4698]: E1006 11:47:12.329379 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:12 crc kubenswrapper[4698]: E1006 11:47:12.329496 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:13 crc kubenswrapper[4698]: E1006 11:47:13.347666 4698 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 06 11:47:13 crc kubenswrapper[4698]: E1006 11:47:13.428647 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 11:47:14 crc kubenswrapper[4698]: I1006 11:47:14.328260 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:14 crc kubenswrapper[4698]: I1006 11:47:14.328422 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:14 crc kubenswrapper[4698]: E1006 11:47:14.328495 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:14 crc kubenswrapper[4698]: I1006 11:47:14.328422 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:14 crc kubenswrapper[4698]: E1006 11:47:14.328666 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:14 crc kubenswrapper[4698]: E1006 11:47:14.328817 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:14 crc kubenswrapper[4698]: I1006 11:47:14.329231 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:14 crc kubenswrapper[4698]: E1006 11:47:14.329558 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:16 crc kubenswrapper[4698]: I1006 11:47:16.327913 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:16 crc kubenswrapper[4698]: I1006 11:47:16.328176 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:16 crc kubenswrapper[4698]: E1006 11:47:16.328991 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 11:47:16 crc kubenswrapper[4698]: I1006 11:47:16.328256 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:16 crc kubenswrapper[4698]: I1006 11:47:16.328148 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:16 crc kubenswrapper[4698]: E1006 11:47:16.329366 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 11:47:16 crc kubenswrapper[4698]: E1006 11:47:16.329509 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 11:47:16 crc kubenswrapper[4698]: E1006 11:47:16.329723 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b" Oct 06 11:47:18 crc kubenswrapper[4698]: I1006 11:47:18.328405 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:18 crc kubenswrapper[4698]: I1006 11:47:18.328461 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:18 crc kubenswrapper[4698]: I1006 11:47:18.328520 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:18 crc kubenswrapper[4698]: I1006 11:47:18.328480 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:18 crc kubenswrapper[4698]: E1006 11:47:18.328609 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:18 crc kubenswrapper[4698]: E1006 11:47:18.328789 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:18 crc kubenswrapper[4698]: E1006 11:47:18.328980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:18 crc kubenswrapper[4698]: E1006 11:47:18.329102 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:18 crc kubenswrapper[4698]: E1006 11:47:18.430605 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 06 11:47:20 crc kubenswrapper[4698]: I1006 11:47:20.328267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:20 crc kubenswrapper[4698]: I1006 11:47:20.328341 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:20 crc kubenswrapper[4698]: I1006 11:47:20.328391 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:20 crc kubenswrapper[4698]: I1006 11:47:20.328354 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:20 crc kubenswrapper[4698]: E1006 11:47:20.328473 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:20 crc kubenswrapper[4698]: E1006 11:47:20.328629 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:20 crc kubenswrapper[4698]: E1006 11:47:20.328750 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:20 crc kubenswrapper[4698]: E1006 11:47:20.328891 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:21 crc kubenswrapper[4698]: I1006 11:47:21.329938 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.167077 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/3.log"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.170103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerStarted","Data":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"}
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.170739 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.207980 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podStartSLOduration=109.207957795 podStartE2EDuration="1m49.207957795s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:22.206427305 +0000 UTC
m=+129.619119478" watchObservedRunningTime="2025-10-06 11:47:22.207957795 +0000 UTC m=+129.620649968"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.327910 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.327963 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.328025 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.328081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:22 crc kubenswrapper[4698]: E1006 11:47:22.328178 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:22 crc kubenswrapper[4698]: E1006 11:47:22.328386 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:22 crc kubenswrapper[4698]: E1006 11:47:22.328563 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:22 crc kubenswrapper[4698]: E1006 11:47:22.328781 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:22 crc kubenswrapper[4698]: I1006 11:47:22.336694 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v8wrg"]
Oct 06 11:47:23 crc kubenswrapper[4698]: I1006 11:47:23.175580 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:23 crc kubenswrapper[4698]: E1006 11:47:23.175794 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:23 crc kubenswrapper[4698]: I1006 11:47:23.331518 4698 scope.go:117] "RemoveContainer" containerID="be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9"
Oct 06 11:47:23 crc kubenswrapper[4698]: E1006 11:47:23.436375 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 06 11:47:24 crc kubenswrapper[4698]: I1006 11:47:24.184556 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/1.log"
Oct 06 11:47:24 crc kubenswrapper[4698]: I1006 11:47:24.184660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerStarted","Data":"3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c"}
Oct 06 11:47:24 crc kubenswrapper[4698]: I1006 11:47:24.329305 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:24 crc kubenswrapper[4698]: I1006 11:47:24.329361 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:24 crc kubenswrapper[4698]: I1006 11:47:24.329417 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:24 crc kubenswrapper[4698]: E1006 11:47:24.329478 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:24 crc kubenswrapper[4698]: E1006 11:47:24.329634 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:24 crc kubenswrapper[4698]: E1006 11:47:24.329949 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:25 crc kubenswrapper[4698]: I1006 11:47:25.328810 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:25 crc kubenswrapper[4698]: E1006 11:47:25.329119 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:26 crc kubenswrapper[4698]: I1006 11:47:26.328663 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:26 crc kubenswrapper[4698]: I1006 11:47:26.328662 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:26 crc kubenswrapper[4698]: I1006 11:47:26.328684 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:26 crc kubenswrapper[4698]: E1006 11:47:26.329661 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:26 crc kubenswrapper[4698]: E1006 11:47:26.329401 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:26 crc kubenswrapper[4698]: E1006 11:47:26.329711 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:27 crc kubenswrapper[4698]: I1006 11:47:27.328731 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:27 crc kubenswrapper[4698]: E1006 11:47:27.329052 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v8wrg" podUID="13806999-a8a3-4c95-b41e-6def8c208f4b"
Oct 06 11:47:28 crc kubenswrapper[4698]: I1006 11:47:28.327907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:28 crc kubenswrapper[4698]: I1006 11:47:28.327958 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:28 crc kubenswrapper[4698]: I1006 11:47:28.328086 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:28 crc kubenswrapper[4698]: E1006 11:47:28.328163 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 11:47:28 crc kubenswrapper[4698]: E1006 11:47:28.328332 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 11:47:28 crc kubenswrapper[4698]: E1006 11:47:28.328416 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 11:47:29 crc kubenswrapper[4698]: I1006 11:47:29.328256 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg"
Oct 06 11:47:29 crc kubenswrapper[4698]: I1006 11:47:29.331607 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Oct 06 11:47:29 crc kubenswrapper[4698]: I1006 11:47:29.331607 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.328991 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.329198 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.329214 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.332537 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.333088 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.333129 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Oct 06 11:47:30 crc kubenswrapper[4698]: I1006 11:47:30.333086 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.031575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.096938 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.098341 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.098912 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.100413 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.100495 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.101338 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.102188 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.103118 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.103797 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.104374 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.105702 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.106669 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.107073 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.107580 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.108278 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t96dq"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.109205 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.111358 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.112217 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.112659 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m7f55"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.113381 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.113683 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6qn85"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.114513 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6qn85"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.114969 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.133036 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.134942 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.135332 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.135588 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136003 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136156 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136201 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136232 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136274 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136285 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136342 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136427 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136503 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136568 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136615 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136688 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136731 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006
11:47:33.136811 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136863 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.136982 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137043 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137102 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137110 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137130 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137139 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137215 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137227 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137264 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137372 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137395 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137408 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137512 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137540 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137553 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137515 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137587 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137627 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137667 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137738 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137765 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.137856 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.138053 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.143399 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.146115 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.146221 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.146988 4698 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.147362 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.147804 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.148295 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.150098 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.160974 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.170465 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.170463 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.170620 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.170691 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.171213 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.171371 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.171581 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.171764 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.171785 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.172423 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.173634 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pqn5k"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.172451 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.174726 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.174878 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176536 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-image-import-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/511f6221-7c79-4345-bc67-677a14b028fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176638 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176676 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zczq\" (UniqueName: \"kubernetes.io/projected/511f6221-7c79-4345-bc67-677a14b028fb-kube-api-access-8zczq\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176719 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-auth-proxy-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176757 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhw8\" (UniqueName: \"kubernetes.io/projected/8d2dcba3-ec97-4c74-838c-a77d8661cd32-kube-api-access-ffhw8\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176787 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-client\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-encryption-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") "
pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176823 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511f6221-7c79-4345-bc67-677a14b028fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176856 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-config\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176872 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176888 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176914 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhl8\" (UniqueName: \"kubernetes.io/projected/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-kube-api-access-4qhl8\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176951 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.176967 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-serving-cert\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: 
I1006 11:47:33.176986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdk85\" (UniqueName: \"kubernetes.io/projected/651e7770-2e16-4d27-9fd6-30e281eba126-kube-api-access-sdk85\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a2a3eda-b55b-46a8-8196-de125ec180a3-machine-approver-tls\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phznf\" (UniqueName: \"kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-config\") pod 
\"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177094 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177130 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177166 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e7770-2e16-4d27-9fd6-30e281eba126-serving-cert\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f785f172-fe51-4984-a2c8-fb228244202b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177211 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177227 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-service-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177269 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177286 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2dcba3-ec97-4c74-838c-a77d8661cd32-serving-cert\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177317 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " 
pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177331 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-trusted-ca\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177347 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sk4s\" (UniqueName: \"kubernetes.io/projected/f785f172-fe51-4984-a2c8-fb228244202b-kube-api-access-7sk4s\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177364 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bpwq5\" (UniqueName: \"kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177420 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177437 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-node-pullsecrets\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651e7770-2e16-4d27-9fd6-30e281eba126-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177473 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177492 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177508 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkhh\" (UniqueName: \"kubernetes.io/projected/6a709775-a67f-4f9e-813b-03b0089f0ca5-kube-api-access-gxkhh\") pod \"downloads-7954f5f757-6qn85\" (UID: \"6a709775-a67f-4f9e-813b-03b0089f0ca5\") " pod="openshift-console/downloads-7954f5f757-6qn85" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m657x\" (UniqueName: \"kubernetes.io/projected/3a2a3eda-b55b-46a8-8196-de125ec180a3-kube-api-access-m657x\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9g7r\" (UniqueName: \"kubernetes.io/projected/8c4af101-d225-4613-8ab7-82268dc3bc62-kube-api-access-j9g7r\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177595 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177611 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9lh\" (UniqueName: \"kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" 
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177642 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177657 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit-dir\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177688 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8c4af101-d225-4613-8ab7-82268dc3bc62-serving-cert\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.177815 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178160 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178367 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178443 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178520 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.178758 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.180646 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 06 11:47:33 crc 
kubenswrapper[4698]: I1006 11:47:33.180837 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.181501 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.181649 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.181827 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.181934 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.182134 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.182247 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.182426 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.182434 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.182650 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.183260 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" 
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.184122 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.184365 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.184585 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.184690 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.189209 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.190851 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.175006 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.191954 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.192239 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.192521 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.192543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.193238 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jndpr"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.193596 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.193631 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.193756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.193989 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mvkf2"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.194134 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.194311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.194359 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.195817 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.213246 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.217846 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.218569 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.220269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.220703 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.221358 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.222448 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.222635 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.222819 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.223038 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.223448 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.224627 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.225080 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.225465 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.225504 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.225742 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.225903 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.231007 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.239086 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.249249 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8lfpm"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.250148 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.250238 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.250782 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.250984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.251804 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.251872 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.253947 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.254915 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.255599 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.256107 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.256380 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.256947 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.257760 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.260175 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.261070 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.261087 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.261587 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.261737 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.262700 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.263312 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.263622 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.263795 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.264168 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.264840 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.266410 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.266506 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.267037 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.268250 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.269340 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.270124 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.270617 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.271250 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdwhw"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.273086 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.273245 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.276045 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.276792 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.276893 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkhh\" (UniqueName: \"kubernetes.io/projected/6a709775-a67f-4f9e-813b-03b0089f0ca5-kube-api-access-gxkhh\") pod \"downloads-7954f5f757-6qn85\" (UID: \"6a709775-a67f-4f9e-813b-03b0089f0ca5\") " pod="openshift-console/downloads-7954f5f757-6qn85"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278288 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m657x\" (UniqueName: \"kubernetes.io/projected/3a2a3eda-b55b-46a8-8196-de125ec180a3-kube-api-access-m657x\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9g7r\" (UniqueName: \"kubernetes.io/projected/8c4af101-d225-4613-8ab7-82268dc3bc62-kube-api-access-j9g7r\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278328 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278430 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9lh\" (UniqueName: \"kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit-dir\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278504 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c4af101-d225-4613-8ab7-82268dc3bc62-serving-cert\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278546 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-image-import-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/511f6221-7c79-4345-bc67-677a14b028fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278579 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82772df5-6a23-4099-9db9-43750e3c55c2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zczq\" (UniqueName: \"kubernetes.io/projected/511f6221-7c79-4345-bc67-677a14b028fb-kube-api-access-8zczq\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278640 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278660 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-auth-proxy-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-client\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-encryption-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278728 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhw8\" (UniqueName: \"kubernetes.io/projected/8d2dcba3-ec97-4c74-838c-a77d8661cd32-kube-api-access-ffhw8\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511f6221-7c79-4345-bc67-677a14b028fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278764 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278785 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-config\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278821 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278849 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278866 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhl8\" (UniqueName: \"kubernetes.io/projected/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-kube-api-access-4qhl8\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-serving-cert\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdk85\" (UniqueName: \"kubernetes.io/projected/651e7770-2e16-4d27-9fd6-30e281eba126-kube-api-access-sdk85\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a2a3eda-b55b-46a8-8196-de125ec180a3-machine-approver-tls\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phznf\" (UniqueName: \"kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.278991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-config\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279108 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e7770-2e16-4d27-9fd6-30e281eba126-serving-cert\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279124 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f785f172-fe51-4984-a2c8-fb228244202b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82772df5-6a23-4099-9db9-43750e3c55c2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-service-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279230 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279246 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2dcba3-ec97-4c74-838c-a77d8661cd32-serving-cert\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.279527 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.280353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit-dir\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.280458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-config\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.280672 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"]
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.280828 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281073 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72pq2\" (UniqueName: \"kubernetes.io/projected/82772df5-6a23-4099-9db9-43750e3c55c2-kube-api-access-72pq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-trusted-ca\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sk4s\" (UniqueName: \"kubernetes.io/projected/f785f172-fe51-4984-a2c8-fb228244202b-kube-api-access-7sk4s\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"
Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281198 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwq5\" (UniqueName: \"kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-node-pullsecrets\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281251 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651e7770-2e16-4d27-9fd6-30e281eba126-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281712 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.281960 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.282225 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.283135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.283830 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.283981 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511f6221-7c79-4345-bc67-677a14b028fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.284352 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.284669 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-config\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.284738 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.285816 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.286664 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m7f55"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.286713 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pqn5k"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.286726 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.286878 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 
11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.287862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.287940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a2a3eda-b55b-46a8-8196-de125ec180a3-machine-approver-tls\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.288568 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.289046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.289084 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a2a3eda-b55b-46a8-8196-de125ec180a3-auth-proxy-config\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.289646 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.289660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-etcd-client\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.290066 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-audit\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.291265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.291397 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-node-pullsecrets\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc 
kubenswrapper[4698]: I1006 11:47:33.291576 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d2dcba3-ec97-4c74-838c-a77d8661cd32-trusted-ca\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.291645 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t96dq"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.292257 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.298688 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.298792 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jndpr"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.299072 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc 
kubenswrapper[4698]: I1006 11:47:33.299198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651e7770-2e16-4d27-9fd6-30e281eba126-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.299515 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-image-import-ca\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.300334 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f785f172-fe51-4984-a2c8-fb228244202b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.300701 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mvkf2"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.300774 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.300918 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.301323 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.302135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/511f6221-7c79-4345-bc67-677a14b028fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.302413 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-encryption-config\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.302543 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc 
kubenswrapper[4698]: I1006 11:47:33.303066 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e7770-2e16-4d27-9fd6-30e281eba126-serving-cert\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.303379 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.303787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c4af101-d225-4613-8ab7-82268dc3bc62-serving-cert\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.304328 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.304690 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.306064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c4af101-d225-4613-8ab7-82268dc3bc62-service-ca-bundle\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.306494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-serving-cert\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.306912 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.307095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.311618 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.312967 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2dcba3-ec97-4c74-838c-a77d8661cd32-serving-cert\") pod \"console-operator-58897d9998-m7f55\" (UID: 
\"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.315904 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.315958 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.316205 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-whpq8"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.317419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.321393 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.322928 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.322960 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.336204 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.336246 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.336259 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6qn85"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.336692 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.339269 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.340349 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.340508 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.341423 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bxj5s"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.342346 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.343115 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.344467 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.345670 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h5bhs"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.346543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.347253 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.348207 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.349950 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.350386 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.351659 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.353038 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.354114 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.355275 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.356405 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.357566 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bxj5s"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.358598 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.360813 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.360988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.362824 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdwhw"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.363046 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-whpq8"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.363988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 
11:47:33.365048 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.366302 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wjbf9"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.367601 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wjbf9"] Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.367734 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.380670 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.382085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82772df5-6a23-4099-9db9-43750e3c55c2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.382224 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72pq2\" (UniqueName: \"kubernetes.io/projected/82772df5-6a23-4099-9db9-43750e3c55c2-kube-api-access-72pq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.382399 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82772df5-6a23-4099-9db9-43750e3c55c2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.382776 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82772df5-6a23-4099-9db9-43750e3c55c2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.385459 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82772df5-6a23-4099-9db9-43750e3c55c2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.401363 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.420209 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.460807 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.480977 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 
11:47:33.500533 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.521851 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.540588 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.569561 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.581345 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.601162 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.620721 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.661426 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.682067 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.702217 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.721749 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.741947 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.762611 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.781179 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.801589 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.831536 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.841696 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.861438 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.881600 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.902221 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.921924 4698 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.940855 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.961899 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 06 11:47:33 crc kubenswrapper[4698]: I1006 11:47:33.982219 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.002432 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.021571 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.040421 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.061687 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.081154 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.100701 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.121409 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.141301 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.162546 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.181345 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.201110 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.220480 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.242630 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.262312 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.278909 4698 request.go:700] Waited for 1.016900104s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.283269 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.302849 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.322831 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.342374 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.361599 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.383078 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.401145 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.421688 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.442940 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.461698 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.481382 4698 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.501067 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.521842 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.542147 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.574219 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.582110 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.602318 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.621672 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.640784 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.660968 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.681141 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.700798 4698 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.721532 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.742374 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.762594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.783108 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.801338 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.823096 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.841708 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.893324 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qhl8\" (UniqueName: \"kubernetes.io/projected/bb98fe0e-cb74-471a-b7c7-1430c86e64b8-kube-api-access-4qhl8\") pod \"apiserver-76f77b778f-t96dq\" (UID: \"bb98fe0e-cb74-471a-b7c7-1430c86e64b8\") " pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.902172 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.909734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkhh\" (UniqueName: \"kubernetes.io/projected/6a709775-a67f-4f9e-813b-03b0089f0ca5-kube-api-access-gxkhh\") pod \"downloads-7954f5f757-6qn85\" (UID: \"6a709775-a67f-4f9e-813b-03b0089f0ca5\") " pod="openshift-console/downloads-7954f5f757-6qn85" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.942477 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.954236 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m657x\" (UniqueName: \"kubernetes.io/projected/3a2a3eda-b55b-46a8-8196-de125ec180a3-kube-api-access-m657x\") pod \"machine-approver-56656f9798-6hj4x\" (UID: \"3a2a3eda-b55b-46a8-8196-de125ec180a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.972132 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-6qn85" Oct 06 11:47:34 crc kubenswrapper[4698]: I1006 11:47:34.993732 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9g7r\" (UniqueName: \"kubernetes.io/projected/8c4af101-d225-4613-8ab7-82268dc3bc62-kube-api-access-j9g7r\") pod \"authentication-operator-69f744f599-dtpxb\" (UID: \"8c4af101-d225-4613-8ab7-82268dc3bc62\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.010567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phznf\" (UniqueName: \"kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf\") pod \"console-f9d7485db-dtbvf\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.024378 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.031437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zczq\" (UniqueName: \"kubernetes.io/projected/511f6221-7c79-4345-bc67-677a14b028fb-kube-api-access-8zczq\") pod \"openshift-apiserver-operator-796bbdcf4f-4plfr\" (UID: \"511f6221-7c79-4345-bc67-677a14b028fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.056710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhw8\" (UniqueName: \"kubernetes.io/projected/8d2dcba3-ec97-4c74-838c-a77d8661cd32-kube-api-access-ffhw8\") pod \"console-operator-58897d9998-m7f55\" (UID: \"8d2dcba3-ec97-4c74-838c-a77d8661cd32\") " pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.063882 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.072838 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9lh\" (UniqueName: \"kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh\") pod \"controller-manager-879f6c89f-7vqvf\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.081622 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdk85\" (UniqueName: \"kubernetes.io/projected/651e7770-2e16-4d27-9fd6-30e281eba126-kube-api-access-sdk85\") pod \"openshift-config-operator-7777fb866f-zn2x5\" (UID: \"651e7770-2e16-4d27-9fd6-30e281eba126\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.082592 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.113235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwq5\" (UniqueName: \"kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5\") pod \"oauth-openshift-558db77b4-wcq9d\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.123384 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.128172 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sk4s\" (UniqueName: \"kubernetes.io/projected/f785f172-fe51-4984-a2c8-fb228244202b-kube-api-access-7sk4s\") pod \"cluster-samples-operator-665b6dd947-lsnwk\" (UID: \"f785f172-fe51-4984-a2c8-fb228244202b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.134610 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.141813 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.142490 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.150344 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.156076 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.162486 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.181662 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.203847 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.225077 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.241640 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.246365 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.248969 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" event={"ID":"3a2a3eda-b55b-46a8-8196-de125ec180a3","Type":"ContainerStarted","Data":"7ea3fbdb59ae5813dcaec0c03ad3131d00690abdd1aeab5fed2888664597b6ac"} Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.268565 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.277323 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6qn85"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.279097 4698 request.go:700] Waited for 1.932263267s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.281124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.301302 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.314615 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.321423 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.341319 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.354508 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.356241 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.362056 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.363681 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dtpxb"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.405814 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72pq2\" (UniqueName: \"kubernetes.io/projected/82772df5-6a23-4099-9db9-43750e3c55c2-kube-api-access-72pq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnlfz\" (UID: \"82772df5-6a23-4099-9db9-43750e3c55c2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:35 crc kubenswrapper[4698]: W1006 11:47:35.412694 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c4af101_d225_4613_8ab7_82268dc3bc62.slice/crio-b627e291b3a941600e1efebb819a98301a20fa089903a94bc794ad6d460da20f WatchSource:0}: Error finding container b627e291b3a941600e1efebb819a98301a20fa089903a94bc794ad6d460da20f: Status 404 returned error can't find the container with id b627e291b3a941600e1efebb819a98301a20fa089903a94bc794ad6d460da20f Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.447690 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.461301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.520743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57c68812-98ea-4f4b-955a-8252578da54f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-encryption-config\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-service-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lmj\" (UniqueName: \"kubernetes.io/projected/ad9f70e1-77ed-474a-b816-0060897e95bc-kube-api-access-j6lmj\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/788a08c7-1586-4847-a98d-3152493bcfb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 
11:47:35.521367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvzh\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521430 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-serving-cert\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/788a08c7-1586-4847-a98d-3152493bcfb8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e71697-cacc-4345-b37e-50e35c09f278-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521539 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521559 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-policies\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521643 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92j85\" (UniqueName: \"kubernetes.io/projected/0116b858-1992-495a-8522-457552954e56-kube-api-access-92j85\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:35 crc 
kubenswrapper[4698]: I1006 11:47:35.521664 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-metrics-certs\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521726 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82jr\" (UniqueName: \"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-kube-api-access-j82jr\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521763 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57c68812-98ea-4f4b-955a-8252578da54f-proxy-tls\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521862 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/68a1c3b6-484b-4230-8a85-19152744b843-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521881 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmfx\" (UniqueName: \"kubernetes.io/projected/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-kube-api-access-5nmfx\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-stats-auth\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.521989 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t594s\" (UniqueName: \"kubernetes.io/projected/c0e71697-cacc-4345-b37e-50e35c09f278-kube-api-access-t594s\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: 
\"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522042 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dece9d7f-879d-44ed-8264-a0ba4788e4e0-service-ca-bundle\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522116 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-dir\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522140 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kths\" (UniqueName: \"kubernetes.io/projected/dece9d7f-879d-44ed-8264-a0ba4788e4e0-kube-api-access-6kths\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.522157 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/788a08c7-1586-4847-a98d-3152493bcfb8-config\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68a1c3b6-484b-4230-8a85-19152744b843-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnl4\" (UniqueName: \"kubernetes.io/projected/5e2f12e2-e22c-4943-97f7-53338837e37b-kube-api-access-kxnl4\") pod \"migrator-59844c95c7-ftbcf\" (UID: \"5e2f12e2-e22c-4943-97f7-53338837e37b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526717 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-default-certificate\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e71697-cacc-4345-b37e-50e35c09f278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: 
\"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.526875 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0116b858-1992-495a-8522-457552954e56-metrics-tls\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.527070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kj62\" (UniqueName: \"kubernetes.io/projected/57c68812-98ea-4f4b-955a-8252578da54f-kube-api-access-7kj62\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 
11:47:35.527153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-client\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.527225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-config\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.533515 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-serving-cert\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.533618 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.533769 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.533882 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.534244 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.534353 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.534384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-client\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.534472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vmgvv\" (UniqueName: \"kubernetes.io/projected/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-kube-api-access-vmgvv\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.541583 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.041553813 +0000 UTC m=+143.454245986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.578161 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.617803 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.635942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636077 4698 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"] Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636210 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdmt\" (UniqueName: \"kubernetes.io/projected/bce49935-ff6b-4266-a77e-1a1377b739d7-kube-api-access-wwdmt\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636256 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6lmj\" (UniqueName: \"kubernetes.io/projected/ad9f70e1-77ed-474a-b816-0060897e95bc-kube-api-access-j6lmj\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjr9l\" (UniqueName: \"kubernetes.io/projected/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-kube-api-access-fjr9l\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636305 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzp4\" (UniqueName: \"kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.636384 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.13635215 +0000 UTC m=+143.549044603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636612 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eef5ed90-dd02-478f-8038-4970199b1cac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvzh\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.636798 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-serving-cert\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637084 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69hw\" (UniqueName: \"kubernetes.io/projected/e96bd49d-b945-43be-8811-999cf2a20e20-kube-api-access-v69hw\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637233 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv95t\" (UniqueName: \"kubernetes.io/projected/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-kube-api-access-qv95t\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637335 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/788a08c7-1586-4847-a98d-3152493bcfb8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-profile-collector-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637451 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-key\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637486 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-policies\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637602 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82jr\" (UniqueName: \"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-kube-api-access-j82jr\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637693 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57c68812-98ea-4f4b-955a-8252578da54f-proxy-tls\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a1c3b6-484b-4230-8a85-19152744b843-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637780 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-stats-auth\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637808 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-plugins-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dece9d7f-879d-44ed-8264-a0ba4788e4e0-service-ca-bundle\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637943 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9qt\" (UniqueName: \"kubernetes.io/projected/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-kube-api-access-2j9qt\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.637971 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphf6\" (UniqueName: \"kubernetes.io/projected/1279167d-d379-4629-985d-d16d070765ab-kube-api-access-dphf6\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:35 crc 
kubenswrapper[4698]: I1006 11:47:35.638058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kths\" (UniqueName: \"kubernetes.io/projected/dece9d7f-879d-44ed-8264-a0ba4788e4e0-kube-api-access-6kths\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638108 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788a08c7-1586-4847-a98d-3152493bcfb8-config\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxnl4\" (UniqueName: \"kubernetes.io/projected/5e2f12e2-e22c-4943-97f7-53338837e37b-kube-api-access-kxnl4\") pod \"migrator-59844c95c7-ftbcf\" (UID: \"5e2f12e2-e22c-4943-97f7-53338837e37b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e71697-cacc-4345-b37e-50e35c09f278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8h4\" (UniqueName: 
\"kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638595 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0116b858-1992-495a-8522-457552954e56-metrics-tls\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.638628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-mountpoint-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.641938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-tmpfs\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.642078 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-cabundle\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.642229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-config\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.642566 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.643303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-certs\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.643453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279167d-d379-4629-985d-d16d070765ab-config\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:35 
crc kubenswrapper[4698]: I1006 11:47:35.643501 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.643626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-registration-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.643665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.643806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.644035 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-images\") pod \"machine-api-operator-5694c8668f-6dhbx\" 
(UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.644075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-images\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.645188 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57c68812-98ea-4f4b-955a-8252578da54f-proxy-tls\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.647884 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-serving-cert\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.648095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-config\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.648336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/788a08c7-1586-4847-a98d-3152493bcfb8-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.648431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-trusted-ca\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.648621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-client\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmgvv\" (UniqueName: \"kubernetes.io/projected/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-kube-api-access-vmgvv\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-metrics-tls\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-service-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.649605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.649670 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.149655669 +0000 UTC m=+143.562347842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-service-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4dg\" (UniqueName: \"kubernetes.io/projected/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-kube-api-access-pr4dg\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650663 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28872def-5be0-4810-9e03-4e06cc15a51f-config\") pod 
\"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650796 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/788a08c7-1586-4847-a98d-3152493bcfb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650847 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgw4\" (UniqueName: \"kubernetes.io/projected/a0a30575-9201-4f0c-8ca4-c651b7a72151-kube-api-access-njgw4\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.650904 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-metrics-tls\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-srv-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651338 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e71697-cacc-4345-b37e-50e35c09f278-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4ps\" (UniqueName: \"kubernetes.io/projected/51b3ecb4-5e79-4fda-963b-a968c5274189-kube-api-access-7k4ps\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-config\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651488 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651505 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-srv-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-webhook-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92j85\" (UniqueName: \"kubernetes.io/projected/0116b858-1992-495a-8522-457552954e56-kube-api-access-92j85\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-metrics-certs\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.651711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-socket-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.652350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e71697-cacc-4345-b37e-50e35c09f278-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 
crc kubenswrapper[4698]: I1006 11:47:35.652399 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-ca\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.652572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e71697-cacc-4345-b37e-50e35c09f278-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653129 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxg2\" (UniqueName: \"kubernetes.io/projected/eef5ed90-dd02-478f-8038-4970199b1cac-kube-api-access-nqxg2\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: 
\"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653636 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28872def-5be0-4810-9e03-4e06cc15a51f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmfx\" (UniqueName: \"kubernetes.io/projected/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-kube-api-access-5nmfx\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788a08c7-1586-4847-a98d-3152493bcfb8-config\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.653847 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 
11:47:35.653919 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t594s\" (UniqueName: \"kubernetes.io/projected/c0e71697-cacc-4345-b37e-50e35c09f278-kube-api-access-t594s\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654007 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654297 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-dir\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654341 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e96bd49d-b945-43be-8811-999cf2a20e20-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654365 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654386 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtcd\" (UniqueName: \"kubernetes.io/projected/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-kube-api-access-hjtcd\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk4p\" (UniqueName: \"kubernetes.io/projected/dcdad66d-8a87-4f84-99d4-a6380a737895-kube-api-access-czk4p\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c473e8-5028-4727-b307-00e23db260e5-proxy-tls\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-node-bootstrap-token\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" 
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654481 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-cert\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8czf\" (UniqueName: \"kubernetes.io/projected/e7c473e8-5028-4727-b307-00e23db260e5-kube-api-access-b8czf\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.654534 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-dir\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.656593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68a1c3b6-484b-4230-8a85-19152744b843-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.657383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-default-certificate\") pod 
\"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.657479 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.657527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.657558 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj62\" (UniqueName: \"kubernetes.io/projected/57c68812-98ea-4f4b-955a-8252578da54f-kube-api-access-7kj62\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.657685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-client\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.658215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.658697 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-serving-cert\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.658905 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1279167d-d379-4629-985d-d16d070765ab-serving-cert\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.658983 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659085 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-csi-data-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kbk\" (UniqueName: \"kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659191 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-config-volume\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659313 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28872def-5be0-4810-9e03-4e06cc15a51f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57c68812-98ea-4f4b-955a-8252578da54f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-encryption-config\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659880 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.659916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvc5h\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-kube-api-access-cvc5h\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.662912 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-client\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.662992 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-etcd-client\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.663219 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-encryption-config\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.663967 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad9f70e1-77ed-474a-b816-0060897e95bc-serving-cert\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.664094 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68a1c3b6-484b-4230-8a85-19152744b843-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.666060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.666422 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dece9d7f-879d-44ed-8264-a0ba4788e4e0-service-ca-bundle\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.666526 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57c68812-98ea-4f4b-955a-8252578da54f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.668366 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.668623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.669210 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.669342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.669791 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.669836 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-stats-auth\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.670957 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.671320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a1c3b6-484b-4230-8a85-19152744b843-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.676855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-default-certificate\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.677019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dece9d7f-879d-44ed-8264-a0ba4788e4e0-metrics-certs\") pod \"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.677780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.679354 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"]
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.683547 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad9f70e1-77ed-474a-b816-0060897e95bc-audit-policies\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.684200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6lmj\" (UniqueName: \"kubernetes.io/projected/ad9f70e1-77ed-474a-b816-0060897e95bc-kube-api-access-j6lmj\") pod \"apiserver-7bbb656c7d-6vgpq\" (UID: \"ad9f70e1-77ed-474a-b816-0060897e95bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.685181 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.686492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0116b858-1992-495a-8522-457552954e56-metrics-tls\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.710821 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m7f55"]
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.712258 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t96dq"]
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.712668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvzh\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.717358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d96fa0d-a03c-44ca-827b-cd0cc390f5a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xgsdk\" (UID: \"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"
Oct 06 11:47:35 crc kubenswrapper[4698]: W1006 11:47:35.725534 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb98fe0e_cb74_471a_b7c7_1430c86e64b8.slice/crio-58fcf131e6ad8a191584dae5c29d3e1a74b3ddc78815eefbb5b94b4332ce3a0c WatchSource:0}: Error finding container 58fcf131e6ad8a191584dae5c29d3e1a74b3ddc78815eefbb5b94b4332ce3a0c: Status 404 returned error can't find the container with id 58fcf131e6ad8a191584dae5c29d3e1a74b3ddc78815eefbb5b94b4332ce3a0c
Oct 06 11:47:35 crc kubenswrapper[4698]: W1006 11:47:35.729814 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2dcba3_ec97_4c74_838c_a77d8661cd32.slice/crio-e9dc9ee80a09f0b5bbff00e589a74d59ea8d7cc9c95071f6b4066153955a4c5c WatchSource:0}: Error finding container e9dc9ee80a09f0b5bbff00e589a74d59ea8d7cc9c95071f6b4066153955a4c5c: Status 404 returned error can't find the container with id e9dc9ee80a09f0b5bbff00e589a74d59ea8d7cc9c95071f6b4066153955a4c5c
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.735305 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.750730 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz"]
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760522 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760655 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1279167d-d379-4629-985d-d16d070765ab-serving-cert\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760694 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-csi-data-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760716 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kbk\" (UniqueName: \"kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760734 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-config-volume\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760771 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28872def-5be0-4810-9e03-4e06cc15a51f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760791 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760809 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvc5h\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-kube-api-access-cvc5h\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760827 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwdmt\" (UniqueName: \"kubernetes.io/projected/bce49935-ff6b-4266-a77e-1a1377b739d7-kube-api-access-wwdmt\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760847 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjr9l\" (UniqueName: \"kubernetes.io/projected/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-kube-api-access-fjr9l\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760867 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzp4\" (UniqueName: \"kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eef5ed90-dd02-478f-8038-4970199b1cac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69hw\" (UniqueName: \"kubernetes.io/projected/e96bd49d-b945-43be-8811-999cf2a20e20-kube-api-access-v69hw\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760945 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv95t\" (UniqueName: \"kubernetes.io/projected/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-kube-api-access-qv95t\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.760986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-profile-collector-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761006 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-key\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761075 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-plugins-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9qt\" (UniqueName: \"kubernetes.io/projected/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-kube-api-access-2j9qt\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761125 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphf6\" (UniqueName: \"kubernetes.io/projected/1279167d-d379-4629-985d-d16d070765ab-kube-api-access-dphf6\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761168 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8h4\" (UniqueName: \"kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761189 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-mountpoint-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-tmpfs\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761238 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-cabundle\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw"
Oct 06 11:47:35 crc kubenswrapper[4698]: W1006 11:47:35.761237 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82772df5_6a23_4099_9db9_43750e3c55c2.slice/crio-da98f443023dbb2cfb987e69796db9b3916c3f6ac75ef4094bcaad1279975530 WatchSource:0}: Error finding container da98f443023dbb2cfb987e69796db9b3916c3f6ac75ef4094bcaad1279975530: Status 404 returned error can't find the container with id da98f443023dbb2cfb987e69796db9b3916c3f6ac75ef4094bcaad1279975530
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761259 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-certs\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761378 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279167d-d379-4629-985d-d16d070765ab-config\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-registration-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-images\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761488 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-images\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-trusted-ca\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761638 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-metrics-tls\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761699 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"
Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.761796 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.26176822 +0000 UTC m=+143.674460393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761833 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4dg\" (UniqueName: \"kubernetes.io/projected/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-kube-api-access-pr4dg\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761894 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28872def-5be0-4810-9e03-4e06cc15a51f-config\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"
Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca\") pod
\"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761952 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgw4\" (UniqueName: \"kubernetes.io/projected/a0a30575-9201-4f0c-8ca4-c651b7a72151-kube-api-access-njgw4\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761972 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-metrics-tls\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761989 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-srv-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762026 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4ps\" (UniqueName: \"kubernetes.io/projected/51b3ecb4-5e79-4fda-963b-a968c5274189-kube-api-access-7k4ps\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762049 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-config\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762086 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-srv-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-webhook-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762123 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-socket-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762180 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nqxg2\" (UniqueName: \"kubernetes.io/projected/eef5ed90-dd02-478f-8038-4970199b1cac-kube-api-access-nqxg2\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762204 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28872def-5be0-4810-9e03-4e06cc15a51f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762242 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c473e8-5028-4727-b307-00e23db260e5-proxy-tls\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762311 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e96bd49d-b945-43be-8811-999cf2a20e20-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtcd\" (UniqueName: \"kubernetes.io/projected/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-kube-api-access-hjtcd\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czk4p\" (UniqueName: \"kubernetes.io/projected/dcdad66d-8a87-4f84-99d4-a6380a737895-kube-api-access-czk4p\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-node-bootstrap-token\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:35 crc kubenswrapper[4698]: 
I1006 11:47:35.762407 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-cert\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8czf\" (UniqueName: \"kubernetes.io/projected/e7c473e8-5028-4727-b307-00e23db260e5-kube-api-access-b8czf\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.762450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.763888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279167d-d379-4629-985d-d16d070765ab-config\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.764186 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-trusted-ca\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 
11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.764907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.765262 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-images\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.765616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-registration-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.766125 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-images\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.766224 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-config-volume\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:35 crc 
kubenswrapper[4698]: I1006 11:47:35.767032 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-socket-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.761697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-csi-data-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.767703 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-metrics-tls\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.767737 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7c473e8-5028-4727-b307-00e23db260e5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.768412 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28872def-5be0-4810-9e03-4e06cc15a51f-config\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.769693 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef5ed90-dd02-478f-8038-4970199b1cac-config\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.770482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.770773 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-mountpoint-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.771311 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-tmpfs\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.771370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: 
\"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.771563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.771952 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dcdad66d-8a87-4f84-99d4-a6380a737895-plugins-dir\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.773287 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.773408 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.773704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-cabundle\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.773892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1279167d-d379-4629-985d-d16d070765ab-serving-cert\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.778991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eef5ed90-dd02-478f-8038-4970199b1cac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.779118 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82jr\" (UniqueName: 
\"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-kube-api-access-j82jr\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.779489 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e96bd49d-b945-43be-8811-999cf2a20e20-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.780493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0a30575-9201-4f0c-8ca4-c651b7a72151-signing-key\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.780532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.780728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-profile-collector-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 
11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.781292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-certs\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.781349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.781768 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-webhook-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.782482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-metrics-tls\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.782662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:35 crc 
kubenswrapper[4698]: I1006 11:47:35.783634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bce49935-ff6b-4266-a77e-1a1377b739d7-node-bootstrap-token\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.784189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.785955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-srv-cert\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.786092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51b3ecb4-5e79-4fda-963b-a968c5274189-srv-cert\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.787649 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7c473e8-5028-4727-b307-00e23db260e5-proxy-tls\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.787680 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.787717 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-cert\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.797781 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28872def-5be0-4810-9e03-4e06cc15a51f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.801535 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxnl4\" (UniqueName: \"kubernetes.io/projected/5e2f12e2-e22c-4943-97f7-53338837e37b-kube-api-access-kxnl4\") pod \"migrator-59844c95c7-ftbcf\" (UID: \"5e2f12e2-e22c-4943-97f7-53338837e37b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.817673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kths\" (UniqueName: \"kubernetes.io/projected/dece9d7f-879d-44ed-8264-a0ba4788e4e0-kube-api-access-6kths\") pod 
\"router-default-5444994796-8lfpm\" (UID: \"dece9d7f-879d-44ed-8264-a0ba4788e4e0\") " pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.840714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmgvv\" (UniqueName: \"kubernetes.io/projected/a3cbda35-4a6e-4df2-8e0b-852355fbdafd-kube-api-access-vmgvv\") pod \"etcd-operator-b45778765-mvkf2\" (UID: \"a3cbda35-4a6e-4df2-8e0b-852355fbdafd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.854479 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.857387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/788a08c7-1586-4847-a98d-3152493bcfb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsgp2\" (UID: \"788a08c7-1586-4847-a98d-3152493bcfb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.864208 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.864857 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:36.364840164 +0000 UTC m=+143.777532337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.867865 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.877736 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92j85\" (UniqueName: \"kubernetes.io/projected/0116b858-1992-495a-8522-457552954e56-kube-api-access-92j85\") pod \"dns-operator-744455d44c-pqn5k\" (UID: \"0116b858-1992-495a-8522-457552954e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.897999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmfx\" (UniqueName: \"kubernetes.io/projected/1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb-kube-api-access-5nmfx\") pod \"multus-admission-controller-857f4d67dd-jndpr\" (UID: \"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.923566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t594s\" (UniqueName: \"kubernetes.io/projected/c0e71697-cacc-4345-b37e-50e35c09f278-kube-api-access-t594s\") pod \"kube-storage-version-migrator-operator-b67b599dd-sf56h\" (UID: \"c0e71697-cacc-4345-b37e-50e35c09f278\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.935122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj62\" (UniqueName: \"kubernetes.io/projected/57c68812-98ea-4f4b-955a-8252578da54f-kube-api-access-7kj62\") pod \"machine-config-controller-84d6567774-gjlzr\" (UID: \"57c68812-98ea-4f4b-955a-8252578da54f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:35 crc kubenswrapper[4698]: W1006 11:47:35.951766 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddece9d7f_879d_44ed_8264_a0ba4788e4e0.slice/crio-c3faea43a31d2b10f1acc7bc4cbef39e28e076091d9f80b511df23c92098d08f WatchSource:0}: Error finding container c3faea43a31d2b10f1acc7bc4cbef39e28e076091d9f80b511df23c92098d08f: Status 404 returned error can't find the container with id c3faea43a31d2b10f1acc7bc4cbef39e28e076091d9f80b511df23c92098d08f Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.955996 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a1c3b6-484b-4230-8a85-19152744b843-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9vpb5\" (UID: \"68a1c3b6-484b-4230-8a85-19152744b843\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.965513 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.965652 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.465601851 +0000 UTC m=+143.878294024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.966568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:35 crc kubenswrapper[4698]: E1006 11:47:35.967144 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.467136106 +0000 UTC m=+143.879828269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:35 crc kubenswrapper[4698]: I1006 11:47:35.999023 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kbk\" (UniqueName: \"kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk\") pod \"collect-profiles-29329185-5vkh9\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.019233 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czk4p\" (UniqueName: \"kubernetes.io/projected/dcdad66d-8a87-4f84-99d4-a6380a737895-kube-api-access-czk4p\") pod \"csi-hostpathplugin-wjbf9\" (UID: \"dcdad66d-8a87-4f84-99d4-a6380a737895\") " pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.029311 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.040983 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtcd\" (UniqueName: \"kubernetes.io/projected/e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f-kube-api-access-hjtcd\") pod \"packageserver-d55dfcdfc-qjmrn\" (UID: \"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.065799 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28872def-5be0-4810-9e03-4e06cc15a51f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xhm7g\" (UID: \"28872def-5be0-4810-9e03-4e06cc15a51f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.067220 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.067559 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.567542653 +0000 UTC m=+143.980234826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.074583 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.078596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxg2\" (UniqueName: \"kubernetes.io/projected/eef5ed90-dd02-478f-8038-4970199b1cac-kube-api-access-nqxg2\") pod \"machine-api-operator-5694c8668f-6dhbx\" (UID: \"eef5ed90-dd02-478f-8038-4970199b1cac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.095686 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.101271 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8czf\" (UniqueName: \"kubernetes.io/projected/e7c473e8-5028-4727-b307-00e23db260e5-kube-api-access-b8czf\") pod \"machine-config-operator-74547568cd-qmf88\" (UID: \"e7c473e8-5028-4727-b307-00e23db260e5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.102973 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.109541 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.117381 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgw4\" (UniqueName: \"kubernetes.io/projected/a0a30575-9201-4f0c-8ca4-c651b7a72151-kube-api-access-njgw4\") pod \"service-ca-9c57cc56f-cdwhw\" (UID: \"a0a30575-9201-4f0c-8ca4-c651b7a72151\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.117763 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.124579 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.135511 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.143698 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.156327 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.160265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4dg\" (UniqueName: \"kubernetes.io/projected/7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1-kube-api-access-pr4dg\") pod \"dns-default-whpq8\" (UID: \"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1\") " pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.164048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.169547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.170189 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.670161664 +0000 UTC m=+144.082853837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.196537 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.198048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4ps\" (UniqueName: \"kubernetes.io/projected/51b3ecb4-5e79-4fda-963b-a968c5274189-kube-api-access-7k4ps\") pod \"catalog-operator-68c6474976-btwhx\" (UID: \"51b3ecb4-5e79-4fda-963b-a968c5274189\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.211915 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.216107 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjr9l\" (UniqueName: \"kubernetes.io/projected/8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e-kube-api-access-fjr9l\") pod \"ingress-canary-bxj5s\" (UID: \"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e\") " pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.223222 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.224597 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.229224 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9qt\" (UniqueName: \"kubernetes.io/projected/e1fc648b-f253-4be1-b2dc-e7d86ad8fc07-kube-api-access-2j9qt\") pod \"olm-operator-6b444d44fb-q974b\" (UID: \"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.232904 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.255570 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzp4\" (UniqueName: \"kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4\") pod \"marketplace-operator-79b997595-69fdx\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") " pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.270866 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.271250 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.271617 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.771588901 +0000 UTC m=+144.184281074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.271698 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.272261 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.772253479 +0000 UTC m=+144.184945652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.273499 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwdmt\" (UniqueName: \"kubernetes.io/projected/bce49935-ff6b-4266-a77e-1a1377b739d7-kube-api-access-wwdmt\") pod \"machine-config-server-h5bhs\" (UID: \"bce49935-ff6b-4266-a77e-1a1377b739d7\") " pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.282453 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.284503 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wjbf9"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.286671 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphf6\" (UniqueName: \"kubernetes.io/projected/1279167d-d379-4629-985d-d16d070765ab-kube-api-access-dphf6\") pod \"service-ca-operator-777779d784-5cm97\" (UID: \"1279167d-d379-4629-985d-d16d070765ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.291051 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.292804 4698 generic.go:334] "Generic (PLEG): container finished" podID="651e7770-2e16-4d27-9fd6-30e281eba126" containerID="9fe9ac9a8b0a79122d78ecb72342f77b694d3cc39f5c18b1460b85886ab1a33e" exitCode=0 Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.292885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" event={"ID":"651e7770-2e16-4d27-9fd6-30e281eba126","Type":"ContainerDied","Data":"9fe9ac9a8b0a79122d78ecb72342f77b694d3cc39f5c18b1460b85886ab1a33e"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.292911 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" event={"ID":"651e7770-2e16-4d27-9fd6-30e281eba126","Type":"ContainerStarted","Data":"aca63e0d6215cc4a95a546f9f8b5794f318fceaf912bbdd5ec3d719e588b55de"} Oct 06 11:47:36 crc kubenswrapper[4698]: W1006 11:47:36.294070 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9f70e1_77ed_474a_b816_0060897e95bc.slice/crio-b8e5b3664ce5dc505ee1681876cf008f8cbf0570ff4b590f80425432853de7e5 WatchSource:0}: Error finding container b8e5b3664ce5dc505ee1681876cf008f8cbf0570ff4b590f80425432853de7e5: Status 404 returned error can't find the container with id b8e5b3664ce5dc505ee1681876cf008f8cbf0570ff4b590f80425432853de7e5 Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.295090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dtbvf" event={"ID":"dc33924c-840f-497c-ad04-657d6fa573a9","Type":"ContainerStarted","Data":"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.295116 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-dtbvf" event={"ID":"dc33924c-840f-497c-ad04-657d6fa573a9","Type":"ContainerStarted","Data":"60b3bcc07e486bdd7bc488a7d872179b8a711a2e2cf664f1891a75541c1003bc"} Oct 06 11:47:36 crc kubenswrapper[4698]: W1006 11:47:36.297111 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcdad66d_8a87_4f84_99d4_a6380a737895.slice/crio-94b574044584690671a2fb07f4c4b22c0b2c6d4c6f346e148b7cb8317ed1d39b WatchSource:0}: Error finding container 94b574044584690671a2fb07f4c4b22c0b2c6d4c6f346e148b7cb8317ed1d39b: Status 404 returned error can't find the container with id 94b574044584690671a2fb07f4c4b22c0b2c6d4c6f346e148b7cb8317ed1d39b Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.298359 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv95t\" (UniqueName: \"kubernetes.io/projected/c51a9b0f-7c30-4d46-8b1c-f248ce31b955-kube-api-access-qv95t\") pod \"control-plane-machine-set-operator-78cbb6b69f-2j94r\" (UID: \"c51a9b0f-7c30-4d46-8b1c-f248ce31b955\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.298753 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bxj5s" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.302061 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6qn85" event={"ID":"6a709775-a67f-4f9e-813b-03b0089f0ca5","Type":"ContainerStarted","Data":"7d43727998116571101cbedab4446be42a5ed0ae23545c52ac3148dec14e39b0"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.302102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6qn85" event={"ID":"6a709775-a67f-4f9e-813b-03b0089f0ca5","Type":"ContainerStarted","Data":"77bccc8602423075a6e8c94f4d6d7b43eeef64e397d29d30ce338b91396b6f34"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.302407 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6qn85" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.310386 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qn85 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.310440 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qn85" podUID="6a709775-a67f-4f9e-813b-03b0089f0ca5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.312180 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h5bhs" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.325169 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8h4\" (UniqueName: \"kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4\") pod \"route-controller-manager-6576b87f9c-9bcwc\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.336211 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69hw\" (UniqueName: \"kubernetes.io/projected/e96bd49d-b945-43be-8811-999cf2a20e20-kube-api-access-v69hw\") pod \"package-server-manager-789f6589d5-cjl22\" (UID: \"e96bd49d-b945-43be-8811-999cf2a20e20\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.347122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" event={"ID":"bb98fe0e-cb74-471a-b7c7-1430c86e64b8","Type":"ContainerStarted","Data":"58fcf131e6ad8a191584dae5c29d3e1a74b3ddc78815eefbb5b94b4332ce3a0c"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.352711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" event={"ID":"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1","Type":"ContainerStarted","Data":"f9973e5f96cbbf6edd31572849fec410f8dff1df0ca5aba557b583170e194c9e"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.352750 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" event={"ID":"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1","Type":"ContainerStarted","Data":"819d4fe8316bec3e1e751d6b3be93ac162c8a8f1c85a89b8abeee81f096c2d9b"} Oct 06 
11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.353249 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.359927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvc5h\" (UniqueName: \"kubernetes.io/projected/a529be58-d9fb-4e76-a2d3-7cafbd5a6829-kube-api-access-cvc5h\") pod \"ingress-operator-5b745b69d9-9npqg\" (UID: \"a529be58-d9fb-4e76-a2d3-7cafbd5a6829\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.359630 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7vqvf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.360815 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.363114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" event={"ID":"3a2a3eda-b55b-46a8-8196-de125ec180a3","Type":"ContainerStarted","Data":"2a19f9418dfb26a2a878d944b52476857777746663ebeb8094374631404b653c"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.366148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-m7f55" 
event={"ID":"8d2dcba3-ec97-4c74-838c-a77d8661cd32","Type":"ContainerStarted","Data":"9ff944e56a84d6017cc04753b3ecc9f8d7137703e583218bab1bcb4756572b37"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.366176 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-m7f55" event={"ID":"8d2dcba3-ec97-4c74-838c-a77d8661cd32","Type":"ContainerStarted","Data":"e9dc9ee80a09f0b5bbff00e589a74d59ea8d7cc9c95071f6b4066153955a4c5c"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.367104 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.368881 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-m7f55 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.368915 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-m7f55" podUID="8d2dcba3-ec97-4c74-838c-a77d8661cd32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.378736 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.378810 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-pqn5k"] Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.379107 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.87906742 +0000 UTC m=+144.291759593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.406427 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" event={"ID":"82772df5-6a23-4099-9db9-43750e3c55c2","Type":"ContainerStarted","Data":"da98f443023dbb2cfb987e69796db9b3916c3f6ac75ef4094bcaad1279975530"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.409041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" event={"ID":"8c4af101-d225-4613-8ab7-82268dc3bc62","Type":"ContainerStarted","Data":"1add37cb1191d7a86c80d517622001ab3da693f53ef8b76cf50f518e9c6d8c41"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.409070 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" event={"ID":"8c4af101-d225-4613-8ab7-82268dc3bc62","Type":"ContainerStarted","Data":"b627e291b3a941600e1efebb819a98301a20fa089903a94bc794ad6d460da20f"} Oct 06 11:47:36 crc 
kubenswrapper[4698]: I1006 11:47:36.420484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8lfpm" event={"ID":"dece9d7f-879d-44ed-8264-a0ba4788e4e0","Type":"ContainerStarted","Data":"c3faea43a31d2b10f1acc7bc4cbef39e28e076091d9f80b511df23c92098d08f"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.424786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" event={"ID":"f785f172-fe51-4984-a2c8-fb228244202b","Type":"ContainerStarted","Data":"896d31e58190daa29590b305547c487a23c9f33de1d6ec13efdd664747af5876"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.424853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" event={"ID":"f785f172-fe51-4984-a2c8-fb228244202b","Type":"ContainerStarted","Data":"6ff78a7df662b610f5e20a9580a158f92d254c0315fd7a8cfe7e4dc78ae5e58f"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.438850 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" event={"ID":"2603dc30-08f8-4a0c-946f-4d4f971fae56","Type":"ContainerStarted","Data":"7d68bda39eb9097540b22cdde7c074520bc3200d312307cdc5f23f6d0a80841e"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.446891 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" event={"ID":"511f6221-7c79-4345-bc67-677a14b028fb","Type":"ContainerStarted","Data":"97959f875d439867eb8e6c8fd6ea51fbada944939ade0d8df7e14ddd51f85215"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.446925 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" 
event={"ID":"511f6221-7c79-4345-bc67-677a14b028fb","Type":"ContainerStarted","Data":"5952a59106f866192fa9b66f8cb0f46451f9617d70cdbf01501f1a54b465275c"} Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.450864 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.460231 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.476911 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.481804 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.486589 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.488151 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:36.988126554 +0000 UTC m=+144.400818727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.499745 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.514993 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.539542 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.556141 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.573222 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.583206 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.584673 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.084642111 +0000 UTC m=+144.497334284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.684877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.685399 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.185374459 +0000 UTC m=+144.598066632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.707224 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.734504 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.792100 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.792274 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.292220069 +0000 UTC m=+144.704912242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.792913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.794710 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.29470045 +0000 UTC m=+144.707392623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.806072 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.807805 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mvkf2"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.845065 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.866706 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jndpr"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.869546 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g"] Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.896696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.897281 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.397223098 +0000 UTC m=+144.809915271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.897605 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.898164 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.398147455 +0000 UTC m=+144.810839628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:36 crc kubenswrapper[4698]: I1006 11:47:36.999152 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:36 crc kubenswrapper[4698]: E1006 11:47:36.999463 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.499415567 +0000 UTC m=+144.912107740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.014159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.015193 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.515176955 +0000 UTC m=+144.927869128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.091098 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dtbvf" podStartSLOduration=123.091068605 podStartE2EDuration="2m3.091068605s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:37.070386577 +0000 UTC m=+144.483078750" watchObservedRunningTime="2025-10-06 11:47:37.091068605 +0000 UTC m=+144.503760798" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.115988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.117357 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.616990383 +0000 UTC m=+145.029682556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: W1006 11:47:37.150769 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod788a08c7_1586_4847_a98d_3152493bcfb8.slice/crio-b9892b3667a3bd86b002a0bf3c5d1f5b02e52548ac020c95f31df6a1b64ddc5d WatchSource:0}: Error finding container b9892b3667a3bd86b002a0bf3c5d1f5b02e52548ac020c95f31df6a1b64ddc5d: Status 404 returned error can't find the container with id b9892b3667a3bd86b002a0bf3c5d1f5b02e52548ac020c95f31df6a1b64ddc5d Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.217594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.218077 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.71806026 +0000 UTC m=+145.130752433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.296668 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-m7f55" podStartSLOduration=123.296643476 podStartE2EDuration="2m3.296643476s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:37.294859896 +0000 UTC m=+144.707552079" watchObservedRunningTime="2025-10-06 11:47:37.296643476 +0000 UTC m=+144.709335659" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.325874 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.326232 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.826206207 +0000 UTC m=+145.238898380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.326508 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.326961 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.826924118 +0000 UTC m=+145.239616281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.427504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.427701 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.927668366 +0000 UTC m=+145.340360539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.427851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.428268 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:37.928254532 +0000 UTC m=+145.340946705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.531393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.532194 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.032149049 +0000 UTC m=+145.444841222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.583144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.583685 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.083668886 +0000 UTC m=+145.496361059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.607653 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" event={"ID":"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb","Type":"ContainerStarted","Data":"b200b86e0a28a89df5d8b1358a7df2abed5cccc20399330b58dc8320b9f17819"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.618757 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" event={"ID":"68a1c3b6-484b-4230-8a85-19152744b843","Type":"ContainerStarted","Data":"dc11c39953639e05e1c776d02db744ea9aa119481587a5df5ce22db2de6130e3"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.622999 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" event={"ID":"a3cbda35-4a6e-4df2-8e0b-852355fbdafd","Type":"ContainerStarted","Data":"c8e4e0fe37203a8d18682de5cde1ea715d2950e125c791d2899c906435baff9e"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.649800 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" event={"ID":"3a2a3eda-b55b-46a8-8196-de125ec180a3","Type":"ContainerStarted","Data":"90a54e314bd40a968f76d53b8c245caa51c2ec3232b91686dc2605c753370130"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.674353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" event={"ID":"f785f172-fe51-4984-a2c8-fb228244202b","Type":"ContainerStarted","Data":"ef3ddb98c9e924eefc0389b09295829c7238846d548254261e0d422d4158201e"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.683954 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.684485 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.184463354 +0000 UTC m=+145.597155527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.700296 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4plfr" podStartSLOduration=123.700274874 podStartE2EDuration="2m3.700274874s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:37.673713838 +0000 UTC m=+145.086406011" watchObservedRunningTime="2025-10-06 11:47:37.700274874 +0000 UTC m=+145.112967047" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.698450 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" event={"ID":"c0e71697-cacc-4345-b37e-50e35c09f278","Type":"ContainerStarted","Data":"a6a3c5763b6d78d42caff700e587fb2084b128573d9d1406cfbc52dd5283bb80"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.722301 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" event={"ID":"57c68812-98ea-4f4b-955a-8252578da54f","Type":"ContainerStarted","Data":"0edb7e1deeba621816d439286431925bf61da02234c0c1351d3236b006ab871f"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.723794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" event={"ID":"28872def-5be0-4810-9e03-4e06cc15a51f","Type":"ContainerStarted","Data":"1d1453b52e22e0a4ee694aff7084980db0a037e1dda2deb2fa0d292ce3e1ea6f"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.727187 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" event={"ID":"dcdad66d-8a87-4f84-99d4-a6380a737895","Type":"ContainerStarted","Data":"94b574044584690671a2fb07f4c4b22c0b2c6d4c6f346e148b7cb8317ed1d39b"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.766340 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" podStartSLOduration=123.766312885 podStartE2EDuration="2m3.766312885s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:37.7647721 +0000 UTC m=+145.177464273" watchObservedRunningTime="2025-10-06 11:47:37.766312885 +0000 UTC m=+145.179005058" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.770030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8lfpm" event={"ID":"dece9d7f-879d-44ed-8264-a0ba4788e4e0","Type":"ContainerStarted","Data":"db08206a817688e062a62d746a57a1e0990471048dc38650d0cc7e4da8a7329f"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.777761 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" event={"ID":"2603dc30-08f8-4a0c-946f-4d4f971fae56","Type":"ContainerStarted","Data":"8271b123d9471caebd1720e13a9c86e6017bdfba1b52e8941b3d5f50b79146a6"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.778604 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.785198 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.786788 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.286772326 +0000 UTC m=+145.699464489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.820185 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dtpxb" podStartSLOduration=123.820157007 podStartE2EDuration="2m3.820157007s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:37.796717389 +0000 UTC m=+145.209409562" watchObservedRunningTime="2025-10-06 11:47:37.820157007 +0000 UTC m=+145.232849180" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 
11:47:37.836215 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" event={"ID":"788a08c7-1586-4847-a98d-3152493bcfb8","Type":"ContainerStarted","Data":"b9892b3667a3bd86b002a0bf3c5d1f5b02e52548ac020c95f31df6a1b64ddc5d"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.846139 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h5bhs" event={"ID":"bce49935-ff6b-4266-a77e-1a1377b739d7","Type":"ContainerStarted","Data":"69f5121666a8d6278e64dabed9386a58fd355c4134644acef5e66d4e110c41db"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.849677 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" event={"ID":"ad9f70e1-77ed-474a-b816-0060897e95bc","Type":"ContainerStarted","Data":"b8e5b3664ce5dc505ee1681876cf008f8cbf0570ff4b590f80425432853de7e5"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.857917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" event={"ID":"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4","Type":"ContainerStarted","Data":"a44801c2e4c2db23d804a2be13fd62207808a02a692012fc55897c7a1f22cda7"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.866614 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb98fe0e-cb74-471a-b7c7-1430c86e64b8" containerID="a84b959cff5c5ebfc90854029906d728a4cf6367b2a9ca59d536529cde647ef3" exitCode=0 Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.866715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" event={"ID":"bb98fe0e-cb74-471a-b7c7-1430c86e64b8","Type":"ContainerDied","Data":"a84b959cff5c5ebfc90854029906d728a4cf6367b2a9ca59d536529cde647ef3"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.868348 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.881401 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" event={"ID":"5e2f12e2-e22c-4943-97f7-53338837e37b","Type":"ContainerStarted","Data":"eed58ba301e29137d13c8359fdc6f717f6c1cb3cbbeba8707ca63f606413e2dc"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.884536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" event={"ID":"0116b858-1992-495a-8522-457552954e56","Type":"ContainerStarted","Data":"ee5ec91f1ee81dc241994267684dad5036aeff648c589a4b0a3a52102b2e0145"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.886713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.886879 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.386849185 +0000 UTC m=+145.799541358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.887135 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.887564 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.387544775 +0000 UTC m=+145.800236948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.901741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" event={"ID":"651e7770-2e16-4d27-9fd6-30e281eba126","Type":"ContainerStarted","Data":"118eeee79dbd56679ae38e2d0b743e1b1f907c6d2397cd187e6d5a9b88cd2d2b"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.902998 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.910796 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" event={"ID":"82772df5-6a23-4099-9db9-43750e3c55c2","Type":"ContainerStarted","Data":"e4ef661d0a747edbdd2687790e41e6bede2e4024036a355baefe4ecfbea228f9"} Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.920428 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qn85 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.920467 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qn85" podUID="6a709775-a67f-4f9e-813b-03b0089f0ca5" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.927851 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-m7f55" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.935248 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.995460 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:37 crc kubenswrapper[4698]: E1006 11:47:37.996314 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.49627083 +0000 UTC m=+145.908963013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:37 crc kubenswrapper[4698]: I1006 11:47:37.997279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.007977 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.507956562 +0000 UTC m=+145.920648735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.102922 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.103235 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.603176072 +0000 UTC m=+146.015868255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.103644 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.118366 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:38 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:38 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:38 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.118469 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.153729 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:38.65369575 +0000 UTC m=+146.066387923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.177751 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6qn85" podStartSLOduration=124.177728524 podStartE2EDuration="2m4.177728524s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.118984842 +0000 UTC m=+145.531677015" watchObservedRunningTime="2025-10-06 11:47:38.177728524 +0000 UTC m=+145.590420687" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.247480 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.248790 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.748766116 +0000 UTC m=+146.161458289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.264864 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.300092 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.353493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.354005 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.853986641 +0000 UTC m=+146.266678814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: W1006 11:47:38.397225 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c473e8_5028_4727_b307_00e23db260e5.slice/crio-e1f86b9a16adbd13e75992ad25c305dfb78cf8c267a44d43dcf547e467c73580 WatchSource:0}: Error finding container e1f86b9a16adbd13e75992ad25c305dfb78cf8c267a44d43dcf547e467c73580: Status 404 returned error can't find the container with id e1f86b9a16adbd13e75992ad25c305dfb78cf8c267a44d43dcf547e467c73580 Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.420047 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-whpq8"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.451857 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6dhbx"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.458627 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.459124 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-06 11:47:38.959094612 +0000 UTC m=+146.371786785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.490135 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.509239 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" podStartSLOduration=124.509210449 podStartE2EDuration="2m4.509210449s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.440263507 +0000 UTC m=+145.852955680" watchObservedRunningTime="2025-10-06 11:47:38.509210449 +0000 UTC m=+145.921902622" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.510666 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8lfpm" podStartSLOduration=124.51065983 podStartE2EDuration="2m4.51065983s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.478801993 +0000 UTC m=+145.891494166" watchObservedRunningTime="2025-10-06 11:47:38.51065983 +0000 UTC m=+145.923352003" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 
11:47:38.527204 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnlfz" podStartSLOduration=124.52717894 podStartE2EDuration="2m4.52717894s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.524153664 +0000 UTC m=+145.936845837" watchObservedRunningTime="2025-10-06 11:47:38.52717894 +0000 UTC m=+145.939871113" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.527432 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.560573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.560957 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.060942221 +0000 UTC m=+146.473634394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.583609 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lsnwk" podStartSLOduration=124.583581415 podStartE2EDuration="2m4.583581415s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.559069908 +0000 UTC m=+145.971762081" watchObservedRunningTime="2025-10-06 11:47:38.583581415 +0000 UTC m=+145.996273578" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.588623 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.589130 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bxj5s"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.591373 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.626000 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6hj4x" podStartSLOduration=125.625973583 podStartE2EDuration="2m5.625973583s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.618625033 +0000 UTC m=+146.031317206" watchObservedRunningTime="2025-10-06 11:47:38.625973583 +0000 UTC m=+146.038665756" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.680297 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.680715 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.18069609 +0000 UTC m=+146.593388263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.692300 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdwhw"] Oct 06 11:47:38 crc kubenswrapper[4698]: W1006 11:47:38.764707 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc51a9b0f_7c30_4d46_8b1c_f248ce31b955.slice/crio-bb5db382440e5a5f62d32cf1d0ec82ba962f965d556eb957146c2f28ee4e4189 WatchSource:0}: Error finding container bb5db382440e5a5f62d32cf1d0ec82ba962f965d556eb957146c2f28ee4e4189: Status 404 returned error can't find the container with id bb5db382440e5a5f62d32cf1d0ec82ba962f965d556eb957146c2f28ee4e4189 Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.770581 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" podStartSLOduration=124.770560758 podStartE2EDuration="2m4.770560758s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:38.722062097 +0000 UTC m=+146.134754270" watchObservedRunningTime="2025-10-06 11:47:38.770560758 +0000 UTC m=+146.183252921" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.814646 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.816385 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.316368082 +0000 UTC m=+146.729060255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.895477 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:38 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:38 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:38 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.895563 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.917100 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:38 crc kubenswrapper[4698]: E1006 11:47:38.917544 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.417522181 +0000 UTC m=+146.830214354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.937437 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22"] Oct 06 11:47:38 crc kubenswrapper[4698]: I1006 11:47:38.954920 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx"] Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.019626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" 
Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.020497 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.520483871 +0000 UTC m=+146.933176044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.033099 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bxj5s" event={"ID":"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e","Type":"ContainerStarted","Data":"6fe4fd56ab6ad7117a744668e83505386c8c83d3c6e68cfd09cff8718954627e"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.038779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" event={"ID":"b6a6c7dd-61b1-4609-a50d-bba142afd5f6","Type":"ContainerStarted","Data":"334ac383799443ed165a817c13928a0038564ce1995d18bad44d4771d69c225d"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.056933 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" event={"ID":"5e2f12e2-e22c-4943-97f7-53338837e37b","Type":"ContainerStarted","Data":"fc8168523507b5719f7f626551682d2f4cee7dcc2ef4fb9c680583756626e9d0"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.057038 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" event={"ID":"5e2f12e2-e22c-4943-97f7-53338837e37b","Type":"ContainerStarted","Data":"925fd9c92f82b7c86ddb5a81384646d53a91492df70e21853ae2ebb2f01b62d9"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.103592 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" event={"ID":"a0a30575-9201-4f0c-8ca4-c651b7a72151","Type":"ContainerStarted","Data":"eb00578119ec74ed2a57e749f5a1b0753476a01794881711cd56bfa906e58328"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.105484 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ftbcf" podStartSLOduration=125.10547073 podStartE2EDuration="2m5.10547073s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.10404781 +0000 UTC m=+146.516739983" watchObservedRunningTime="2025-10-06 11:47:39.10547073 +0000 UTC m=+146.518162893" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.122309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.122859 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.622837864 +0000 UTC m=+147.035530037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.122965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.123327 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.623318398 +0000 UTC m=+147.036010571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.135276 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"] Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.167365 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5cm97"] Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.170617 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" event={"ID":"e7c473e8-5028-4727-b307-00e23db260e5","Type":"ContainerStarted","Data":"e1f86b9a16adbd13e75992ad25c305dfb78cf8c267a44d43dcf547e467c73580"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.171127 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b"] Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.209527 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"] Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.214244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" event={"ID":"a3cbda35-4a6e-4df2-8e0b-852355fbdafd","Type":"ContainerStarted","Data":"89317f59cb39099bd3863ffa56d1ca30dd8531d1dfd9610f2f45b9e01502b597"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.223832 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.224382 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.724353894 +0000 UTC m=+147.137046067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: W1006 11:47:39.289827 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1279167d_d379_4629_985d_d16d070765ab.slice/crio-367131b8284ef33e1e6c1b05890399ddd5f3b57211b2e477386f22bcea9ec8b6 WatchSource:0}: Error finding container 367131b8284ef33e1e6c1b05890399ddd5f3b57211b2e477386f22bcea9ec8b6: Status 404 returned error can't find the container with id 367131b8284ef33e1e6c1b05890399ddd5f3b57211b2e477386f22bcea9ec8b6 Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.290203 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" 
event={"ID":"57c68812-98ea-4f4b-955a-8252578da54f","Type":"ContainerStarted","Data":"9fdb6bfb7b1757671620218da83053785521455b8b102fb966c42a5fb720ec3d"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.290430 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" event={"ID":"57c68812-98ea-4f4b-955a-8252578da54f","Type":"ContainerStarted","Data":"6af465e37634243b569f8aed2167adcbf1a6b9ecd9df5795a72277a6366c657a"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.312467 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" event={"ID":"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f","Type":"ContainerStarted","Data":"32dc297cb567d7d5d4a3dff54bb5917df281ba3507c15c0525c456ed32ed07f4"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.325447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.330510 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.830484655 +0000 UTC m=+147.243176828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.333633 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mvkf2" podStartSLOduration=125.333606473 podStartE2EDuration="2m5.333606473s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.260158472 +0000 UTC m=+146.672850645" watchObservedRunningTime="2025-10-06 11:47:39.333606473 +0000 UTC m=+146.746298646" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.371964 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" event={"ID":"0116b858-1992-495a-8522-457552954e56","Type":"ContainerStarted","Data":"da0703f32e11a78e18f942844b3f47274deeac393fe530ea652660d2fe9c0729"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.377871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h5bhs" event={"ID":"bce49935-ff6b-4266-a77e-1a1377b739d7","Type":"ContainerStarted","Data":"80ccdfed484c1ab28e01556d75477b7ccbe56b9fc99bf5be3d22db0ef0ac3668"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.385885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" 
event={"ID":"28872def-5be0-4810-9e03-4e06cc15a51f","Type":"ContainerStarted","Data":"be4e77fac5e675e80d9e9c9dcd3e20beb4a017a4570593319a9d6654602676f6"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.414502 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h5bhs" podStartSLOduration=6.414483056 podStartE2EDuration="6.414483056s" podCreationTimestamp="2025-10-06 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.411796659 +0000 UTC m=+146.824488842" watchObservedRunningTime="2025-10-06 11:47:39.414483056 +0000 UTC m=+146.827175229" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.414705 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gjlzr" podStartSLOduration=125.414701122 podStartE2EDuration="2m5.414701122s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.333235453 +0000 UTC m=+146.745927626" watchObservedRunningTime="2025-10-06 11:47:39.414701122 +0000 UTC m=+146.827393295" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.414978 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whpq8" event={"ID":"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1","Type":"ContainerStarted","Data":"fd78634b064a9c0650007773b7e922cea845209cfb33bca3dce9d1ee8c89be4a"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.434305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.435542 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:39.935516164 +0000 UTC m=+147.348208337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.446892 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xhm7g" podStartSLOduration=125.446856347 podStartE2EDuration="2m5.446856347s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.436604854 +0000 UTC m=+146.849297027" watchObservedRunningTime="2025-10-06 11:47:39.446856347 +0000 UTC m=+146.859548520" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.473771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" event={"ID":"c51a9b0f-7c30-4d46-8b1c-f248ce31b955","Type":"ContainerStarted","Data":"bb5db382440e5a5f62d32cf1d0ec82ba962f965d556eb957146c2f28ee4e4189"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.488225 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" event={"ID":"c0e71697-cacc-4345-b37e-50e35c09f278","Type":"ContainerStarted","Data":"a5a0afcf8fafd7e968ed2fb7ed9997102407b7a62eabc6af75c40947f0d420fc"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.521712 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" event={"ID":"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb","Type":"ContainerStarted","Data":"315d1c9f79332d7cbb3ccb486765e69dd795e9416734a29d628b0433544885a6"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.535495 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.536043 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.036008654 +0000 UTC m=+147.448700827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.568320 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" event={"ID":"788a08c7-1586-4847-a98d-3152493bcfb8","Type":"ContainerStarted","Data":"48a63badcf62587acc869c62253170862e36d412afd51a9780b6128d90ee1628"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.590418 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsgp2" podStartSLOduration=125.590396592 podStartE2EDuration="2m5.590396592s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.588949181 +0000 UTC m=+147.001641354" watchObservedRunningTime="2025-10-06 11:47:39.590396592 +0000 UTC m=+147.003088755" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.591159 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sf56h" podStartSLOduration=125.591149084 podStartE2EDuration="2m5.591149084s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.521067579 +0000 UTC m=+146.933759752" 
watchObservedRunningTime="2025-10-06 11:47:39.591149084 +0000 UTC m=+147.003841257" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.594761 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" event={"ID":"68a1c3b6-484b-4230-8a85-19152744b843","Type":"ContainerStarted","Data":"5f777ac9f3643ba91c0edf5bf5a6d0611dee0a5a7d4e2a45196db35b94969e4e"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.637460 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" event={"ID":"a529be58-d9fb-4e76-a2d3-7cafbd5a6829","Type":"ContainerStarted","Data":"1bba86f0bbd61308244896a15125eb8d34070d38c0dfa12acebd7b313f1620ac"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.639392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.639725 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.139705386 +0000 UTC m=+147.552397559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.665370 4698 generic.go:334] "Generic (PLEG): container finished" podID="ad9f70e1-77ed-474a-b816-0060897e95bc" containerID="75f119f76d6e444cf6e08fd8295c301e5ff324bc52ba52dff1ed8cbce8faf577" exitCode=0 Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.665476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" event={"ID":"ad9f70e1-77ed-474a-b816-0060897e95bc","Type":"ContainerDied","Data":"75f119f76d6e444cf6e08fd8295c301e5ff324bc52ba52dff1ed8cbce8faf577"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.669551 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" event={"ID":"dcdad66d-8a87-4f84-99d4-a6380a737895","Type":"ContainerStarted","Data":"676837dd5c78c946f469f88f72a484dd2abc6788d37177fa6dce44fd27c83ceb"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.690815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" event={"ID":"5d96fa0d-a03c-44ca-827b-cd0cc390f5a4","Type":"ContainerStarted","Data":"795000d07b621b2e9988de18a662e9a7062c789a071cec841eefe3055c32fbd5"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.702946 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" 
event={"ID":"eef5ed90-dd02-478f-8038-4970199b1cac","Type":"ContainerStarted","Data":"b8e18de501c3a1dd0a950322f8f987d949666b5b3c5153a008e3df2b40fa634d"} Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.721139 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9vpb5" podStartSLOduration=125.721111103 podStartE2EDuration="2m5.721111103s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.630902135 +0000 UTC m=+147.043594308" watchObservedRunningTime="2025-10-06 11:47:39.721111103 +0000 UTC m=+147.133803286" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.732631 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zn2x5" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.749156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.753922 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.253899176 +0000 UTC m=+147.666591349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.779906 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xgsdk" podStartSLOduration=125.779868025 podStartE2EDuration="2m5.779868025s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:39.775199852 +0000 UTC m=+147.187892045" watchObservedRunningTime="2025-10-06 11:47:39.779868025 +0000 UTC m=+147.192560198" Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.851144 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:39 crc kubenswrapper[4698]: E1006 11:47:39.855135 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.355108036 +0000 UTC m=+147.767800209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.904650 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:39 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:39 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:39 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:39 crc kubenswrapper[4698]: I1006 11:47:39.904727 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.000459 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.000937 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:40.500921197 +0000 UTC m=+147.913613370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.101752 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.102195 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.602174059 +0000 UTC m=+148.014866232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.203484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.204210 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.704186102 +0000 UTC m=+148.116878275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.305049 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.305464 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.805440294 +0000 UTC m=+148.218132467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.407337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.408347 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:40.908318243 +0000 UTC m=+148.321010416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.512736 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.513141 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.013088804 +0000 UTC m=+148.425780977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.614431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.615123 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.115095507 +0000 UTC m=+148.527787680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.721215 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.721879 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.221844375 +0000 UTC m=+148.634536538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.751222 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" event={"ID":"0116b858-1992-495a-8522-457552954e56","Type":"ContainerStarted","Data":"62b7abc62f47b48c6e10bf9986f2e278928cd0d613c4b372e2c7377f14676370"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.753633 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" event={"ID":"51b3ecb4-5e79-4fda-963b-a968c5274189","Type":"ContainerStarted","Data":"fe3dcf04392218319c6be267bfd95e159d5a4c35aef3399fbd5d3d0549f10b21"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.753706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" event={"ID":"51b3ecb4-5e79-4fda-963b-a968c5274189","Type":"ContainerStarted","Data":"5e125f424161d4f04f22bd966e1fa3277614fa7c8d89abebe88a991ac921dc97"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.754636 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.757950 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-btwhx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 
10.217.0.21:8443: connect: connection refused" start-of-body= Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.758033 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" podUID="51b3ecb4-5e79-4fda-963b-a968c5274189" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.798791 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pqn5k" podStartSLOduration=126.798769665 podStartE2EDuration="2m6.798769665s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:40.796438809 +0000 UTC m=+148.209130982" watchObservedRunningTime="2025-10-06 11:47:40.798769665 +0000 UTC m=+148.211461838" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.802500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" event={"ID":"1279167d-d379-4629-985d-d16d070765ab","Type":"ContainerStarted","Data":"0b644feeb58915947804d20ef7d0e104d8c381e30dac45c621d7c4007b4d3ae4"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.802581 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" event={"ID":"1279167d-d379-4629-985d-d16d070765ab","Type":"ContainerStarted","Data":"367131b8284ef33e1e6c1b05890399ddd5f3b57211b2e477386f22bcea9ec8b6"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.823186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.823751 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.323714775 +0000 UTC m=+148.736406948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.838134 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" podStartSLOduration=126.838117195 podStartE2EDuration="2m6.838117195s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:40.83720695 +0000 UTC m=+148.249899123" watchObservedRunningTime="2025-10-06 11:47:40.838117195 +0000 UTC m=+148.250809368" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.843619 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" 
event={"ID":"c51a9b0f-7c30-4d46-8b1c-f248ce31b955","Type":"ContainerStarted","Data":"7b12346d43398709fb0a2cef9f536205d41f55a7bccee09c13d81c2feb6d96ab"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.875595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" event={"ID":"e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f","Type":"ContainerStarted","Data":"be86facb74ca8177a6986365c94779423e38a48d27526c2fbb7c09c1ccb483c4"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.875834 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.879542 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:40 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:40 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:40 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.879612 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.880091 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qjmrn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.880118 4698 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" podUID="e80e690c-e7b6-43b5-ab6c-625b2ddc3a6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.909062 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bxj5s" event={"ID":"8096a6dd-e8c3-4df8-9cde-0b5b9d4c320e","Type":"ContainerStarted","Data":"a70e14d51c75748fe136aaa0f4476462632b9390b062025604f7b4b9628f1714"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.923486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whpq8" event={"ID":"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1","Type":"ContainerStarted","Data":"0682e293f0c84149a90abcdf3ce44db149bdfaf35f1086bb89dea00b23d135a1"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.924850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.925069 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.42504707 +0000 UTC m=+148.837739243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.927246 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:40 crc kubenswrapper[4698]: E1006 11:47:40.927811 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.427787508 +0000 UTC m=+148.840479681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.938745 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" podStartSLOduration=126.938723378 podStartE2EDuration="2m6.938723378s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:40.934146978 +0000 UTC m=+148.346839141" watchObservedRunningTime="2025-10-06 11:47:40.938723378 +0000 UTC m=+148.351415551" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.938971 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5cm97" podStartSLOduration=126.938963736 podStartE2EDuration="2m6.938963736s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:40.900077519 +0000 UTC m=+148.312769682" watchObservedRunningTime="2025-10-06 11:47:40.938963736 +0000 UTC m=+148.351655909" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.949924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" event={"ID":"bb98fe0e-cb74-471a-b7c7-1430c86e64b8","Type":"ContainerStarted","Data":"acd87d044799811f5f36f62252297928e2dd13755e935098b292bf48c58bfaef"} Oct 06 11:47:40 crc 
kubenswrapper[4698]: I1006 11:47:40.950001 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" event={"ID":"bb98fe0e-cb74-471a-b7c7-1430c86e64b8","Type":"ContainerStarted","Data":"f00912f1f135e6d87d4961cc0b4cfd58286a41e36465ac8bf91d2f52311eae64"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.962072 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2j94r" podStartSLOduration=126.962052563 podStartE2EDuration="2m6.962052563s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:40.960393776 +0000 UTC m=+148.373085949" watchObservedRunningTime="2025-10-06 11:47:40.962052563 +0000 UTC m=+148.374744726" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.962728 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" event={"ID":"b6a6c7dd-61b1-4609-a50d-bba142afd5f6","Type":"ContainerStarted","Data":"aa9fa273109015c2db531f14de4d03dd9dd501a3327d64d38d54177a2d88fa72"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.976322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" event={"ID":"2ecabdc0-bd56-4f58-b619-32c52a2ade73","Type":"ContainerStarted","Data":"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.976424 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" event={"ID":"2ecabdc0-bd56-4f58-b619-32c52a2ade73","Type":"ContainerStarted","Data":"a797b2c91ebe7b95f6cc9e1152947b7a1aa4d645411b692739aa2b649a3c5f0b"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.977230 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.978347 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-69fdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.978400 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.987515 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" event={"ID":"e96bd49d-b945-43be-8811-999cf2a20e20","Type":"ContainerStarted","Data":"40f960c3de22c303430e0b9f7eea137f42e3b1ffb2f815d585c2ef8b22d9f209"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.987577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" event={"ID":"e96bd49d-b945-43be-8811-999cf2a20e20","Type":"ContainerStarted","Data":"eb790b83bdd92807736a31babc368de07cb77f29c391b2716959e55902070e31"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.988597 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bxj5s" podStartSLOduration=7.988568918 podStartE2EDuration="7.988568918s" podCreationTimestamp="2025-10-06 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-06 11:47:40.98725079 +0000 UTC m=+148.399942963" watchObservedRunningTime="2025-10-06 11:47:40.988568918 +0000 UTC m=+148.401261081" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.991811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" event={"ID":"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2","Type":"ContainerStarted","Data":"213d0f934b0205da4b5993ecf395b9a82a1d323205669605973a40461a462f47"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.991852 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" event={"ID":"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2","Type":"ContainerStarted","Data":"a9ab363f9437c6c5190cda410ad14d826b5b62c6d3bd4cff0351405663019d9a"} Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.992585 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:40 crc kubenswrapper[4698]: I1006 11:47:40.997216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" event={"ID":"1d3c1cd3-75a3-4bdc-ae56-b0990e0747fb","Type":"ContainerStarted","Data":"277385aa77a68ba67e4ef7458fbaebffc82cd04e1b0e7618d33c490e6e722107"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.002196 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9bcwc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.002265 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.003851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" event={"ID":"a529be58-d9fb-4e76-a2d3-7cafbd5a6829","Type":"ContainerStarted","Data":"ddf2374091fb40c0e0aa0d1662e89941da5b0902deda5a988a4151661ee1fdc0"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.003921 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" event={"ID":"a529be58-d9fb-4e76-a2d3-7cafbd5a6829","Type":"ContainerStarted","Data":"5422a8d5b1f355000e1797cf140a42dee640b6bf57cefb9ac2ef853dd9ae7dcc"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.014932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" event={"ID":"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07","Type":"ContainerStarted","Data":"9cf4557a6b1f5bf3d310f6b9e6eaed79b587df000013804d23c4fec20a1b4fbb"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.015005 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" event={"ID":"e1fc648b-f253-4be1-b2dc-e7d86ad8fc07","Type":"ContainerStarted","Data":"98f79cfc8cb5a439cee99c2d55117f4ca6da835285ecffd8581841710e8c50db"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.018259 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.019750 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-t96dq" podStartSLOduration=127.019723124 podStartE2EDuration="2m7.019723124s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.017724188 +0000 UTC m=+148.430416361" watchObservedRunningTime="2025-10-06 11:47:41.019723124 +0000 UTC m=+148.432415297" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.030465 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.032061 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.532033435 +0000 UTC m=+148.944725608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.044798 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" event={"ID":"a0a30575-9201-4f0c-8ca4-c651b7a72151","Type":"ContainerStarted","Data":"8beded8fa99dbb21275481ee1056d80370eeae73e0999bc2f12c93e878e59444"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.046131 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-q974b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.046187 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" podUID="e1fc648b-f253-4be1-b2dc-e7d86ad8fc07" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.083208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" event={"ID":"e7c473e8-5028-4727-b307-00e23db260e5","Type":"ContainerStarted","Data":"e17497a9e62762094dde5b5fedf6f69bf235ffc5746648397622e6947b0c5923"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.083615 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" event={"ID":"e7c473e8-5028-4727-b307-00e23db260e5","Type":"ContainerStarted","Data":"3b7218995e5ff90ee6b3fca19deaee92976188b53915ed4e9b960f820b27c787"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.088730 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" podStartSLOduration=127.088706937 podStartE2EDuration="2m7.088706937s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.060619138 +0000 UTC m=+148.473311311" watchObservedRunningTime="2025-10-06 11:47:41.088706937 +0000 UTC m=+148.501399110" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.092373 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" event={"ID":"eef5ed90-dd02-478f-8038-4970199b1cac","Type":"ContainerStarted","Data":"cf2765dce1d76db3993ad0fe40b08c727e15a5612e5cc82712a5663ccf99ce18"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.092439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" event={"ID":"eef5ed90-dd02-478f-8038-4970199b1cac","Type":"ContainerStarted","Data":"c96169a3224c4433e56a7399c90a0909c8d105f48a81d5dce705a45325c3bdcc"} Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.120267 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9npqg" podStartSLOduration=127.120239885 podStartE2EDuration="2m7.120239885s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.089556732 +0000 UTC 
m=+148.502248905" watchObservedRunningTime="2025-10-06 11:47:41.120239885 +0000 UTC m=+148.532932068" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.132089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.132238 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.132326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.132985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.133103 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.136802 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.636782716 +0000 UTC m=+149.049474889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.138184 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.143681 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:41 crc 
kubenswrapper[4698]: I1006 11:47:41.145094 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.148964 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.152934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.156810 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" podStartSLOduration=127.156776245 podStartE2EDuration="2m7.156776245s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.118758583 +0000 UTC m=+148.531450766" watchObservedRunningTime="2025-10-06 11:47:41.156776245 +0000 UTC m=+148.569468418" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.158747 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jndpr" podStartSLOduration=127.157166526 podStartE2EDuration="2m7.157166526s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.154770278 +0000 UTC m=+148.567462471" watchObservedRunningTime="2025-10-06 11:47:41.157166526 +0000 UTC m=+148.569858689" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.162944 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.171473 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.238452 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" podStartSLOduration=127.238422348 podStartE2EDuration="2m7.238422348s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.192390889 +0000 UTC m=+148.605083062" watchObservedRunningTime="2025-10-06 11:47:41.238422348 +0000 UTC m=+148.651114521" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.241180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.242660 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:41.742634498 +0000 UTC m=+149.155326671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.320021 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qmf88" podStartSLOduration=127.319983191 podStartE2EDuration="2m7.319983191s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.310393558 +0000 UTC m=+148.723085731" watchObservedRunningTime="2025-10-06 11:47:41.319983191 +0000 UTC m=+148.732675364" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.320963 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" podStartSLOduration=127.320958168 podStartE2EDuration="2m7.320958168s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.243885694 +0000 UTC m=+148.656577867" watchObservedRunningTime="2025-10-06 11:47:41.320958168 +0000 UTC m=+148.733650341" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.343701 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.344147 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.844131338 +0000 UTC m=+149.256823511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.372693 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cdwhw" podStartSLOduration=127.37266882 podStartE2EDuration="2m7.37266882s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.36988281 +0000 UTC m=+148.782574983" watchObservedRunningTime="2025-10-06 11:47:41.37266882 +0000 UTC m=+148.785360993" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.405348 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6dhbx" podStartSLOduration=127.40533019 podStartE2EDuration="2m7.40533019s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:41.402646893 +0000 UTC m=+148.815339076" watchObservedRunningTime="2025-10-06 11:47:41.40533019 +0000 UTC m=+148.818022363" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.446745 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.447132 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:41.947110098 +0000 UTC m=+149.359802271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.548337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.549264 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.049237245 +0000 UTC m=+149.461929498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.649895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.650371 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.150346984 +0000 UTC m=+149.563039157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.750985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.751330 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.251315707 +0000 UTC m=+149.664007880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.852892 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.853459 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.353436784 +0000 UTC m=+149.766128957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.853682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.854251 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.354229796 +0000 UTC m=+149.766921959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.877451 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:41 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:41 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:41 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.877505 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:41 crc kubenswrapper[4698]: I1006 11:47:41.957155 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:41 crc kubenswrapper[4698]: E1006 11:47:41.957630 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:42.457606498 +0000 UTC m=+149.870298671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:41 crc kubenswrapper[4698]: W1006 11:47:41.998322 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f9ae931937d805f018caeb1a3f1bf588bd356fa280c28f0daf20bafb60d34fe4 WatchSource:0}: Error finding container f9ae931937d805f018caeb1a3f1bf588bd356fa280c28f0daf20bafb60d34fe4: Status 404 returned error can't find the container with id f9ae931937d805f018caeb1a3f1bf588bd356fa280c28f0daf20bafb60d34fe4 Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.060566 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.074150 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.574120495 +0000 UTC m=+149.986812668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.110773 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"987d483b5dc5f5f566ed5585038a2826430b1be1bc4beb243627aadc33a6e71a"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.110837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e18abeae9bf98a7335e90d44d57a8cb20abdc21cc24c7e18b546e48fef3c1425"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.117170 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"216a80c60a82d2f5b5c21f10cde31d7d0afacd7ec81f2f50ab907e234c75e197"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.127656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" event={"ID":"e96bd49d-b945-43be-8811-999cf2a20e20","Type":"ContainerStarted","Data":"2a257e705529d4d842564b216ae40aafc379fa59994d03cf2ee61a95aa64a934"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.128410 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.162092 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" podStartSLOduration=128.162069588 podStartE2EDuration="2m8.162069588s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:42.160504934 +0000 UTC m=+149.573197107" watchObservedRunningTime="2025-10-06 11:47:42.162069588 +0000 UTC m=+149.574761761" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.164255 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.165138 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.665114374 +0000 UTC m=+150.077806547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.168211 4698 generic.go:334] "Generic (PLEG): container finished" podID="b6a6c7dd-61b1-4609-a50d-bba142afd5f6" containerID="aa9fa273109015c2db531f14de4d03dd9dd501a3327d64d38d54177a2d88fa72" exitCode=0 Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.168331 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" event={"ID":"b6a6c7dd-61b1-4609-a50d-bba142afd5f6","Type":"ContainerDied","Data":"aa9fa273109015c2db531f14de4d03dd9dd501a3327d64d38d54177a2d88fa72"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.201182 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" event={"ID":"ad9f70e1-77ed-474a-b816-0060897e95bc","Type":"ContainerStarted","Data":"d45486e6f186d204bc9df3c0ef2501b21ac00637c2a88b34c3e8aa90794dedaf"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.207432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" event={"ID":"dcdad66d-8a87-4f84-99d4-a6380a737895","Type":"ContainerStarted","Data":"9723d0088d125f171f820b53918ca97654ddf9c5f842a4abe17f0fcf36bee0b8"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.224453 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-whpq8" 
event={"ID":"7e6183d4-7cb1-42a1-bebb-d6d4a264e2e1","Type":"ContainerStarted","Data":"1ffa9612512de784bee2e0addcdb2f647bb90072cb2cf37893c59dac7ba661c2"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.225482 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.226610 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" podStartSLOduration=128.226588824 podStartE2EDuration="2m8.226588824s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:42.225280417 +0000 UTC m=+149.637972580" watchObservedRunningTime="2025-10-06 11:47:42.226588824 +0000 UTC m=+149.639280997" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.233548 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f9ae931937d805f018caeb1a3f1bf588bd356fa280c28f0daf20bafb60d34fe4"} Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.238368 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-69fdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.238436 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection 
refused" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.256096 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-whpq8" podStartSLOduration=9.256074993 podStartE2EDuration="9.256074993s" podCreationTimestamp="2025-10-06 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:42.254078887 +0000 UTC m=+149.666771060" watchObservedRunningTime="2025-10-06 11:47:42.256074993 +0000 UTC m=+149.668767156" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.258628 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-btwhx" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.258958 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q974b" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.269120 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.271541 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.771525863 +0000 UTC m=+150.184218036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.367164 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.370953 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.373405 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.873384973 +0000 UTC m=+150.286077146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.374567 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.375049 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.875031379 +0000 UTC m=+150.287723552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.397311 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjmrn" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.475663 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.476098 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:42.976074345 +0000 UTC m=+150.388766518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.577520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.578032 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.077996926 +0000 UTC m=+150.490689099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.621320 4698 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.678900 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.679185 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.179142395 +0000 UTC m=+150.591834568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.679287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.679706 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.179697561 +0000 UTC m=+150.592389734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.781006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.781231 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.281184479 +0000 UTC m=+150.693876652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.781305 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.781646 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.281626892 +0000 UTC m=+150.694319075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.875062 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:42 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:42 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:42 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.875153 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.882714 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.883094 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 11:47:43.383057629 +0000 UTC m=+150.795749802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.883233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.883637 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.383620045 +0000 UTC m=+150.796312218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.984527 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.984745 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.484707633 +0000 UTC m=+150.897399806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:42 crc kubenswrapper[4698]: I1006 11:47:42.984837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:42 crc kubenswrapper[4698]: E1006 11:47:42.985183 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.485174926 +0000 UTC m=+150.897867099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.086570 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.086820 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.586785537 +0000 UTC m=+150.999477710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.086900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.087285 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.587269001 +0000 UTC m=+150.999961174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.188684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.188938 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.688886523 +0000 UTC m=+151.101578696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.189027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.189367 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.689350587 +0000 UTC m=+151.102042760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.240368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d200e9440bb7ae1180c4546a1c9490fa20c6f13f217305c34eb894e6f2dae8b6"} Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.242001 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"10a222556057b3317fe858d4b8320c8e77387d869c94a1710942bc3a9173a4b0"} Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.242402 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.246200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" event={"ID":"dcdad66d-8a87-4f84-99d4-a6380a737895","Type":"ContainerStarted","Data":"3bec3f258f2fd86a794b76ef061cb9a88ce939e37590fdfbf9b294090ea11eb2"} Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.253063 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" event={"ID":"dcdad66d-8a87-4f84-99d4-a6380a737895","Type":"ContainerStarted","Data":"0bc7b114ef4c938e083f62adc36057e8fcfa76a391a337bdf76ca585a62da6d1"} Oct 06 11:47:43 crc 
kubenswrapper[4698]: I1006 11:47:43.246999 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-69fdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.253146 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.300815 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.302060 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.802037254 +0000 UTC m=+151.214729417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.334082 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wjbf9" podStartSLOduration=10.334056045 podStartE2EDuration="10.334056045s" podCreationTimestamp="2025-10-06 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:43.332300775 +0000 UTC m=+150.744992948" watchObservedRunningTime="2025-10-06 11:47:43.334056045 +0000 UTC m=+150.746748238" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.403161 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.403577 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 11:47:43.903561834 +0000 UTC m=+151.316254007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9tmxl" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.499298 4698 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-06T11:47:42.6213613Z","Handler":null,"Name":""} Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.506850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.507272 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 11:47:44.007249135 +0000 UTC m=+151.419941308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.510151 4698 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.510201 4698 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.562758 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.608233 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume\") pod \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.608336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume\") pod \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.608392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6kbk\" (UniqueName: \"kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk\") pod \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\" (UID: \"b6a6c7dd-61b1-4609-a50d-bba142afd5f6\") " Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.608782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.611264 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "b6a6c7dd-61b1-4609-a50d-bba142afd5f6" (UID: "b6a6c7dd-61b1-4609-a50d-bba142afd5f6"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.616127 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.616196 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.624401 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk" (OuterVolumeSpecName: "kube-api-access-w6kbk") pod "b6a6c7dd-61b1-4609-a50d-bba142afd5f6" (UID: "b6a6c7dd-61b1-4609-a50d-bba142afd5f6"). InnerVolumeSpecName "kube-api-access-w6kbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.639559 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b6a6c7dd-61b1-4609-a50d-bba142afd5f6" (UID: "b6a6c7dd-61b1-4609-a50d-bba142afd5f6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.663256 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9tmxl\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.710339 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.710739 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.710760 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.710770 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6kbk\" (UniqueName: \"kubernetes.io/projected/b6a6c7dd-61b1-4609-a50d-bba142afd5f6-kube-api-access-w6kbk\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.718598 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: 
"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.872400 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:43 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:43 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:43 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.872466 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.883252 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.922546 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4l5q"] Oct 06 11:47:43 crc kubenswrapper[4698]: E1006 11:47:43.922810 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a6c7dd-61b1-4609-a50d-bba142afd5f6" containerName="collect-profiles" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.922824 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a6c7dd-61b1-4609-a50d-bba142afd5f6" containerName="collect-profiles" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.922917 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a6c7dd-61b1-4609-a50d-bba142afd5f6" containerName="collect-profiles" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.923752 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.927387 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 11:47:43 crc kubenswrapper[4698]: I1006 11:47:43.933908 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4l5q"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.014538 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.015178 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.015216 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4njd\" (UniqueName: \"kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.101696 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.117068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.117117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4njd\" (UniqueName: \"kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.117158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " 
pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.118121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.118175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.123369 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.125124 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.127718 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.147103 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.155833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4njd\" (UniqueName: \"kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd\") pod \"community-operators-r4l5q\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") " pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.221025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.221113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.221143 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558gg\" (UniqueName: \"kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg\") pod \"certified-operators-hj2gc\" (UID: 
\"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.244862 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.250647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" event={"ID":"ecbf158d-99db-46c0-84e8-a71879e9f56f","Type":"ContainerStarted","Data":"5ac9497d615353b727252ff84bda533ab49f61603c9f48dc0489c62f53c018d8"} Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.252215 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.252273 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9" event={"ID":"b6a6c7dd-61b1-4609-a50d-bba142afd5f6","Type":"ContainerDied","Data":"334ac383799443ed165a817c13928a0038564ce1995d18bad44d4771d69c225d"} Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.252303 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334ac383799443ed165a817c13928a0038564ce1995d18bad44d4771d69c225d" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.321167 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.323448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc 
kubenswrapper[4698]: I1006 11:47:44.324847 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.324876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558gg\" (UniqueName: \"kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.323872 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.325820 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.330964 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.347686 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.350460 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558gg\" (UniqueName: \"kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg\") pod \"certified-operators-hj2gc\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") " pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.428162 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.428203 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hfp\" (UniqueName: \"kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.428246 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.428361 4698 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.429547 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.432029 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.432473 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.464357 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.499344 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.518194 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.521394 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.529393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.529457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.529512 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.529534 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44hfp\" (UniqueName: \"kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.529564 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.530124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.530272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.551259 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.574846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44hfp\" (UniqueName: \"kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp\") pod \"community-operators-97rch\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.628789 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4l5q"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.630805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " 
pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.630887 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.630912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcxf\" (UniqueName: \"kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.630959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.630978 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.631089 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.651715 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.662987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.732526 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.732820 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.732865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdcxf\" (UniqueName: \"kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.733901 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.737348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.764154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdcxf\" (UniqueName: \"kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf\") pod \"certified-operators-r4wkc\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.795584 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.871176 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:44 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:44 crc kubenswrapper[4698]: [+]process-running ok Oct 06 11:47:44 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.871237 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.906162 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.973879 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.974495 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qn85 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.974483 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qn85 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.974539 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qn85" podUID="6a709775-a67f-4f9e-813b-03b0089f0ca5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:47:44 crc kubenswrapper[4698]: I1006 11:47:44.974540 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6qn85" podUID="6a709775-a67f-4f9e-813b-03b0089f0ca5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.025614 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.025659 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.030254 4698 patch_prober.go:28] interesting pod/console-f9d7485db-dtbvf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.030340 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dtbvf" podUID="dc33924c-840f-497c-ad04-657d6fa573a9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.108551 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"] Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.147718 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.147930 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.159218 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.190880 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.259474 4698 generic.go:334] "Generic (PLEG): container finished" podID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerID="4d47054248e867788cbb67b99a73e78fb792e395bc1c0bec46581b254cd646c7" exitCode=0 Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.259544 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerDied","Data":"4d47054248e867788cbb67b99a73e78fb792e395bc1c0bec46581b254cd646c7"} Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.259581 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerStarted","Data":"750a9227acdd9a1b4f14ecd410a81ff0162174ebcafb78d238df43ca1dd48835"} Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.261564 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.262689 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj2gc" event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerStarted","Data":"3638b0470eb123828f4a0a89c4410df2ee57baf4686391958ce095d3863a52af"} Oct 06 11:47:45 crc kubenswrapper[4698]: W1006 11:47:45.263392 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3910193_5a78_4c80_9eb8_8f05beb54b2f.slice/crio-8fd2fa207bd34ba998fdfb2b1863a37cac3cf903645b6081fcd88587abb07d4e WatchSource:0}: Error finding container 8fd2fa207bd34ba998fdfb2b1863a37cac3cf903645b6081fcd88587abb07d4e: Status 404 returned error can't find the container with id 8fd2fa207bd34ba998fdfb2b1863a37cac3cf903645b6081fcd88587abb07d4e Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.264938 4698 generic.go:334] "Generic (PLEG): container finished" podID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerID="7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90" exitCode=0 Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.264993 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerDied","Data":"7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90"} Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.265043 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerStarted","Data":"f78642e88eac7dfa75730dcd0353c186605818f9dd7ac06eb234e87b0018c94c"} Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.269242 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" event={"ID":"ecbf158d-99db-46c0-84e8-a71879e9f56f","Type":"ContainerStarted","Data":"5498b90298218554ccf878db4bca9944b35e346a4e435a7e87d7940d5e748bda"} Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.269663 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.278442 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-t96dq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.295824 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 11:47:45 crc kubenswrapper[4698]: W1006 11:47:45.317177 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd42e4f5_72c4_4d35_920c_1ddbbc3d1851.slice/crio-3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4 WatchSource:0}: Error finding container 3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4: Status 404 returned error can't find the container with id 3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4 Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.370675 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.417837 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" podStartSLOduration=131.417819444 podStartE2EDuration="2m11.417819444s" podCreationTimestamp="2025-10-06 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:45.416899568 +0000 UTC m=+152.829591741" watchObservedRunningTime="2025-10-06 11:47:45.417819444 +0000 UTC m=+152.830511617" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.774365 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.774934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.780986 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.868518 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.875002 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8lfpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 11:47:45 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 06 11:47:45 crc kubenswrapper[4698]: [+]process-running 
ok Oct 06 11:47:45 crc kubenswrapper[4698]: healthz check failed Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.875601 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8lfpm" podUID="dece9d7f-879d-44ed-8264-a0ba4788e4e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.915521 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"] Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.916677 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.925354 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.938199 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"] Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.968655 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.968961 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:45 crc kubenswrapper[4698]: I1006 11:47:45.969153 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlqg\" (UniqueName: \"kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.070196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.070289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.070339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlqg\" (UniqueName: \"kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.070710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.070843 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.105391 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlqg\" (UniqueName: \"kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg\") pod \"redhat-marketplace-ld4fp\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") " pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.237403 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.313346 4698 generic.go:334] "Generic (PLEG): container finished" podID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerID="2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96" exitCode=0 Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.313642 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerDied","Data":"2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96"} Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.314007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerStarted","Data":"8fd2fa207bd34ba998fdfb2b1863a37cac3cf903645b6081fcd88587abb07d4e"} Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.317812 4698 generic.go:334] "Generic (PLEG): container finished" podID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" 
containerID="acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a" exitCode=0 Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.319790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj2gc" event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerDied","Data":"acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a"} Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.329350 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.330477 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.341413 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.358151 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851","Type":"ContainerStarted","Data":"a18fa97783bfc8c05bc315038092fb308e52e64d99a492abf4daa61448c3fea2"} Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.358535 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851","Type":"ContainerStarted","Data":"3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4"} Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.367355 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6vgpq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.377097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.377143 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.377217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbfb\" (UniqueName: \"kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.461601 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.4615709519999998 podStartE2EDuration="2.461570952s" podCreationTimestamp="2025-10-06 11:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:47:46.435845639 +0000 UTC m=+153.848537812" watchObservedRunningTime="2025-10-06 11:47:46.461570952 +0000 UTC m=+153.874263125" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.478433 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " 
pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.478495 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.478565 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbfb\" (UniqueName: \"kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.481813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.482072 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.521105 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbfb\" (UniqueName: \"kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb\") pod \"redhat-marketplace-wzpgq\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " 
pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.562163 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.661123 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.794677 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"] Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.873779 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.880628 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8lfpm" Oct 06 11:47:46 crc kubenswrapper[4698]: I1006 11:47:46.937770 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:47:46 crc kubenswrapper[4698]: W1006 11:47:46.953927 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f73aecf_4753_4041_89d3_2df17062b304.slice/crio-60a34d0924f99a3ab6614a075e0bbe74875abcd0c2a4c86c832cbba52eeecabf WatchSource:0}: Error finding container 60a34d0924f99a3ab6614a075e0bbe74875abcd0c2a4c86c832cbba52eeecabf: Status 404 returned error can't find the container with id 60a34d0924f99a3ab6614a075e0bbe74875abcd0c2a4c86c832cbba52eeecabf Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.068550 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.069837 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.073381 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.073426 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.078585 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.101344 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.101391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.202633 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.202710 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.202931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.224317 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.315592 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.317086 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.319563 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.326113 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.366637 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerStarted","Data":"8818236d842372f82eac2626d9ff1aa241bea85b356d7fe9850068f6cbf778e8"} Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.367815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerStarted","Data":"60a34d0924f99a3ab6614a075e0bbe74875abcd0c2a4c86c832cbba52eeecabf"} Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.370221 4698 generic.go:334] "Generic (PLEG): container finished" podID="cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" containerID="a18fa97783bfc8c05bc315038092fb308e52e64d99a492abf4daa61448c3fea2" exitCode=0 Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.370275 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851","Type":"ContainerDied","Data":"a18fa97783bfc8c05bc315038092fb308e52e64d99a492abf4daa61448c3fea2"} Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.390169 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.408389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtphv\" (UniqueName: \"kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.410135 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.411579 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.515214 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.515303 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtphv\" (UniqueName: \"kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv\") pod \"redhat-operators-ck5qk\" (UID: 
\"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.515344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.515886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.515958 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.536205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtphv\" (UniqueName: \"kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv\") pod \"redhat-operators-ck5qk\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") " pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.644800 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.716535 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.717681 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.727482 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.820666 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.820712 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfz9k\" (UniqueName: \"kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.820741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.847720 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.922521 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.922570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfz9k\" (UniqueName: \"kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.922597 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.922994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 crc kubenswrapper[4698]: I1006 11:47:47.923217 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:47 
crc kubenswrapper[4698]: I1006 11:47:47.939819 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfz9k\" (UniqueName: \"kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k\") pod \"redhat-operators-ng7sj\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.048829 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.214996 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"] Oct 06 11:47:48 crc kubenswrapper[4698]: W1006 11:47:48.271893 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d3b4a5d_7af6_45dc_b7b2_bf61d415808a.slice/crio-9fd407c54f6d07f164afb9949edb632990943d85e601967d3b821dc943df1e04 WatchSource:0}: Error finding container 9fd407c54f6d07f164afb9949edb632990943d85e601967d3b821dc943df1e04: Status 404 returned error can't find the container with id 9fd407c54f6d07f164afb9949edb632990943d85e601967d3b821dc943df1e04 Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.381302 4698 generic.go:334] "Generic (PLEG): container finished" podID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerID="243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca" exitCode=0 Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.381468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerDied","Data":"243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca"} Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.391500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerStarted","Data":"9fd407c54f6d07f164afb9949edb632990943d85e601967d3b821dc943df1e04"} Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.407357 4698 generic.go:334] "Generic (PLEG): container finished" podID="5f73aecf-4753-4041-89d3-2df17062b304" containerID="8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3" exitCode=0 Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.407454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerDied","Data":"8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3"} Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.411069 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d","Type":"ContainerStarted","Data":"2db4320c72715675213ca28ae661358151ae8d77d3e8fa07c7f5f2d495f7fd99"} Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.411219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d","Type":"ContainerStarted","Data":"30e7b0af5708ffa52414403d2e8a8dda5b9f81b8604c41bf3bf4a17ec97a856e"} Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.413656 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.448081 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.448062072 podStartE2EDuration="1.448062072s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-06 11:47:48.442421121 +0000 UTC m=+155.855113294" watchObservedRunningTime="2025-10-06 11:47:48.448062072 +0000 UTC m=+155.860754245" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.726139 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.837364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir\") pod \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.837826 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access\") pod \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\" (UID: \"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851\") " Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.837517 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" (UID: "cd42e4f5-72c4-4d35-920c-1ddbbc3d1851"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.838669 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.866381 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" (UID: "cd42e4f5-72c4-4d35-920c-1ddbbc3d1851"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:48 crc kubenswrapper[4698]: I1006 11:47:48.952173 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd42e4f5-72c4-4d35-920c-1ddbbc3d1851-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.424611 4698 generic.go:334] "Generic (PLEG): container finished" podID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerID="86c8821214e9c5c044c610e74a31047cadc5b4add2851a5def0244ac01f8a606" exitCode=0 Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.424740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerDied","Data":"86c8821214e9c5c044c610e74a31047cadc5b4add2851a5def0244ac01f8a606"} Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.424820 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerStarted","Data":"af2057a7dd492e7b72f4553477dd95d88d7fc1cbea4abe69cc84e7748da963ac"} Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.431383 4698 
generic.go:334] "Generic (PLEG): container finished" podID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerID="5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f" exitCode=0 Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.431579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerDied","Data":"5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f"} Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.433878 4698 generic.go:334] "Generic (PLEG): container finished" podID="4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" containerID="2db4320c72715675213ca28ae661358151ae8d77d3e8fa07c7f5f2d495f7fd99" exitCode=0 Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.433943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d","Type":"ContainerDied","Data":"2db4320c72715675213ca28ae661358151ae8d77d3e8fa07c7f5f2d495f7fd99"} Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.440801 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cd42e4f5-72c4-4d35-920c-1ddbbc3d1851","Type":"ContainerDied","Data":"3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4"} Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.440836 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fc7226851384775c3772984ea096dff2484e6fb613f6903b5115f93ec98ecb4" Oct 06 11:47:49 crc kubenswrapper[4698]: I1006 11:47:49.440906 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 11:47:51 crc kubenswrapper[4698]: I1006 11:47:51.295661 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-whpq8" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.563245 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.652983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access\") pod \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.653214 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir\") pod \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\" (UID: \"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d\") " Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.653712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" (UID: "4f2e4e2a-2d93-4a32-a6dc-f7540052e87d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.661637 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" (UID: "4f2e4e2a-2d93-4a32-a6dc-f7540052e87d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.754819 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.754852 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f2e4e2a-2d93-4a32-a6dc-f7540052e87d-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 11:47:54 crc kubenswrapper[4698]: I1006 11:47:54.993355 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6qn85" Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.032732 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.043801 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.235192 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.235272 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.514195 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4f2e4e2a-2d93-4a32-a6dc-f7540052e87d","Type":"ContainerDied","Data":"30e7b0af5708ffa52414403d2e8a8dda5b9f81b8604c41bf3bf4a17ec97a856e"} Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.514250 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e7b0af5708ffa52414403d2e8a8dda5b9f81b8604c41bf3bf4a17ec97a856e" Oct 06 11:47:55 crc kubenswrapper[4698]: I1006 11:47:55.514283 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 11:47:57 crc kubenswrapper[4698]: I1006 11:47:57.090110 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:57 crc kubenswrapper[4698]: I1006 11:47:57.102979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13806999-a8a3-4c95-b41e-6def8c208f4b-metrics-certs\") pod \"network-metrics-daemon-v8wrg\" (UID: \"13806999-a8a3-4c95-b41e-6def8c208f4b\") " pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:47:57 crc kubenswrapper[4698]: I1006 11:47:57.252372 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v8wrg" Oct 06 11:48:03 crc kubenswrapper[4698]: I1006 11:48:03.889472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:48:07 crc kubenswrapper[4698]: E1006 11:48:07.590896 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 06 11:48:07 crc kubenswrapper[4698]: E1006 11:48:07.591668 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4njd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil
,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r4l5q_openshift-marketplace(06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:48:07 crc kubenswrapper[4698]: E1006 11:48:07.592804 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r4l5q" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.313002 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r4l5q" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.386319 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.386527 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdcxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r4wkc_openshift-marketplace(f3910193-5a78-4c80-9eb8-8f05beb54b2f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.387910 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r4wkc" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" Oct 06 11:48:09 crc 
kubenswrapper[4698]: E1006 11:48:09.465186 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.465342 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-hj2gc_openshift-marketplace(dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:48:09 crc kubenswrapper[4698]: E1006 11:48:09.466688 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hj2gc" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" Oct 06 11:48:12 crc kubenswrapper[4698]: E1006 11:48:12.427296 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hj2gc" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" Oct 06 11:48:12 crc kubenswrapper[4698]: E1006 11:48:12.427767 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r4wkc" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" Oct 06 11:48:15 crc kubenswrapper[4698]: I1006 11:48:15.696086 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v8wrg"] Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.146764 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.146994 4698 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrbfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wzpgq_openshift-marketplace(5f73aecf-4753-4041-89d3-2df17062b304): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.148251 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wzpgq" podUID="5f73aecf-4753-4041-89d3-2df17062b304" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.168876 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.169452 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdlqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ld4fp_openshift-marketplace(6bd2e241-8c70-44a5-bd89-b7bd4523640e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.171281 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ld4fp" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" Oct 06 11:48:16 crc kubenswrapper[4698]: I1006 11:48:16.508046 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cjl22" Oct 06 11:48:16 crc kubenswrapper[4698]: I1006 11:48:16.662258 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerStarted","Data":"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1"} Oct 06 11:48:16 crc kubenswrapper[4698]: I1006 11:48:16.667196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerStarted","Data":"f8facfecbe2cf787cca95efed45f664f30c7a1d95541db3f989cb255a0bccbba"} Oct 06 11:48:16 crc kubenswrapper[4698]: I1006 11:48:16.669587 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" 
event={"ID":"13806999-a8a3-4c95-b41e-6def8c208f4b","Type":"ContainerStarted","Data":"53dd9c050411df9383678e492c2cc2c3760d7a5601b60e6741a090610cc649cf"} Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.670822 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wzpgq" podUID="5f73aecf-4753-4041-89d3-2df17062b304" Oct 06 11:48:16 crc kubenswrapper[4698]: E1006 11:48:16.674123 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ld4fp" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.678471 4698 generic.go:334] "Generic (PLEG): container finished" podID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerID="f8facfecbe2cf787cca95efed45f664f30c7a1d95541db3f989cb255a0bccbba" exitCode=0 Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.680769 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerDied","Data":"f8facfecbe2cf787cca95efed45f664f30c7a1d95541db3f989cb255a0bccbba"} Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.695072 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" event={"ID":"13806999-a8a3-4c95-b41e-6def8c208f4b","Type":"ContainerStarted","Data":"4ee9985ece82592aa042a66fbf44a01dc8d73767f6b106705d97e919214f51d6"} Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.697709 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerID="4e0cf78b51dd5974390a92feb5845874c8dc7afb3b019b6993ea12787ada8bac" exitCode=0 Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.697807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerDied","Data":"4e0cf78b51dd5974390a92feb5845874c8dc7afb3b019b6993ea12787ada8bac"} Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.703649 4698 generic.go:334] "Generic (PLEG): container finished" podID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerID="696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1" exitCode=0 Oct 06 11:48:17 crc kubenswrapper[4698]: I1006 11:48:17.703706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerDied","Data":"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1"} Oct 06 11:48:18 crc kubenswrapper[4698]: I1006 11:48:18.715319 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v8wrg" event={"ID":"13806999-a8a3-4c95-b41e-6def8c208f4b","Type":"ContainerStarted","Data":"d74e153cc1cc9386d7d4b46d41c4b587ad83ecda707e47a5795c68c0f4cc5e1d"} Oct 06 11:48:18 crc kubenswrapper[4698]: I1006 11:48:18.754057 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-v8wrg" podStartSLOduration=165.753979218 podStartE2EDuration="2m45.753979218s" podCreationTimestamp="2025-10-06 11:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:48:18.740829754 +0000 UTC m=+186.153521947" watchObservedRunningTime="2025-10-06 11:48:18.753979218 +0000 UTC m=+186.166671391" Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.736450 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerStarted","Data":"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691"} Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.742842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerStarted","Data":"c3eb3303c68b09da9254ed250c32acd967ee08923b5b9091a9b53611794047c4"} Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.747082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerStarted","Data":"2dc09beb969e30263eb6e32dbdd3a2ab400f036c2382ca8aa8ee7e87e46467e3"} Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.761859 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ck5qk" podStartSLOduration=2.759856928 podStartE2EDuration="33.761834546s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="2025-10-06 11:47:49.434834817 +0000 UTC m=+156.847526990" lastFinishedPulling="2025-10-06 11:48:20.436812405 +0000 UTC m=+187.849504608" observedRunningTime="2025-10-06 11:48:20.758820051 +0000 UTC m=+188.171512264" watchObservedRunningTime="2025-10-06 11:48:20.761834546 +0000 UTC m=+188.174526749" Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.795193 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ng7sj" podStartSLOduration=2.815939044 podStartE2EDuration="33.795160085s" podCreationTimestamp="2025-10-06 11:47:47 +0000 UTC" firstStartedPulling="2025-10-06 11:47:49.428034023 +0000 UTC m=+156.840726196" lastFinishedPulling="2025-10-06 11:48:20.407255024 +0000 UTC m=+187.819947237" 
observedRunningTime="2025-10-06 11:48:20.793547089 +0000 UTC m=+188.206239302" watchObservedRunningTime="2025-10-06 11:48:20.795160085 +0000 UTC m=+188.207852298" Oct 06 11:48:20 crc kubenswrapper[4698]: I1006 11:48:20.818360 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-97rch" podStartSLOduration=2.388714838 podStartE2EDuration="36.818329635s" podCreationTimestamp="2025-10-06 11:47:44 +0000 UTC" firstStartedPulling="2025-10-06 11:47:45.26132035 +0000 UTC m=+152.674012523" lastFinishedPulling="2025-10-06 11:48:19.690935127 +0000 UTC m=+187.103627320" observedRunningTime="2025-10-06 11:48:20.814969489 +0000 UTC m=+188.227661702" watchObservedRunningTime="2025-10-06 11:48:20.818329635 +0000 UTC m=+188.231021848" Oct 06 11:48:21 crc kubenswrapper[4698]: I1006 11:48:21.164602 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 11:48:24 crc kubenswrapper[4698]: I1006 11:48:24.653235 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:24 crc kubenswrapper[4698]: I1006 11:48:24.653560 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:24 crc kubenswrapper[4698]: I1006 11:48:24.778976 4698 generic.go:334] "Generic (PLEG): container finished" podID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerID="97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e" exitCode=0 Oct 06 11:48:24 crc kubenswrapper[4698]: I1006 11:48:24.779062 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerDied","Data":"97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e"} Oct 06 11:48:24 crc kubenswrapper[4698]: I1006 
11:48:24.817105 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:25 crc kubenswrapper[4698]: I1006 11:48:25.235510 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:48:25 crc kubenswrapper[4698]: I1006 11:48:25.235820 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:48:27 crc kubenswrapper[4698]: I1006 11:48:27.649290 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:48:27 crc kubenswrapper[4698]: I1006 11:48:27.650050 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:48:27 crc kubenswrapper[4698]: I1006 11:48:27.737381 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:48:27 crc kubenswrapper[4698]: I1006 11:48:27.888505 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ck5qk" Oct 06 11:48:28 crc kubenswrapper[4698]: I1006 11:48:28.050117 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:28 crc kubenswrapper[4698]: I1006 11:48:28.050170 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:28 crc kubenswrapper[4698]: I1006 11:48:28.101223 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:28 crc kubenswrapper[4698]: I1006 11:48:28.885045 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:30 crc kubenswrapper[4698]: I1006 11:48:30.530137 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:48:30 crc kubenswrapper[4698]: I1006 11:48:30.827223 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ng7sj" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="registry-server" containerID="cri-o://2dc09beb969e30263eb6e32dbdd3a2ab400f036c2382ca8aa8ee7e87e46467e3" gracePeriod=2 Oct 06 11:48:31 crc kubenswrapper[4698]: I1006 11:48:31.838189 4698 generic.go:334] "Generic (PLEG): container finished" podID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerID="2dc09beb969e30263eb6e32dbdd3a2ab400f036c2382ca8aa8ee7e87e46467e3" exitCode=0 Oct 06 11:48:31 crc kubenswrapper[4698]: I1006 11:48:31.838267 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerDied","Data":"2dc09beb969e30263eb6e32dbdd3a2ab400f036c2382ca8aa8ee7e87e46467e3"} Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.705803 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.856949 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content\") pod \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.857375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfz9k\" (UniqueName: \"kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k\") pod \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.857539 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities\") pod \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\" (UID: \"7885934c-f4f7-4615-bab8-4ac47a2eabe6\") " Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.859161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities" (OuterVolumeSpecName: "utilities") pod "7885934c-f4f7-4615-bab8-4ac47a2eabe6" (UID: "7885934c-f4f7-4615-bab8-4ac47a2eabe6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.863165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng7sj" event={"ID":"7885934c-f4f7-4615-bab8-4ac47a2eabe6","Type":"ContainerDied","Data":"af2057a7dd492e7b72f4553477dd95d88d7fc1cbea4abe69cc84e7748da963ac"} Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.863254 4698 scope.go:117] "RemoveContainer" containerID="2dc09beb969e30263eb6e32dbdd3a2ab400f036c2382ca8aa8ee7e87e46467e3" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.863256 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng7sj" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.869162 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k" (OuterVolumeSpecName: "kube-api-access-dfz9k") pod "7885934c-f4f7-4615-bab8-4ac47a2eabe6" (UID: "7885934c-f4f7-4615-bab8-4ac47a2eabe6"). InnerVolumeSpecName "kube-api-access-dfz9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.930841 4698 scope.go:117] "RemoveContainer" containerID="4e0cf78b51dd5974390a92feb5845874c8dc7afb3b019b6993ea12787ada8bac" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.937430 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7885934c-f4f7-4615-bab8-4ac47a2eabe6" (UID: "7885934c-f4f7-4615-bab8-4ac47a2eabe6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.960983 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfz9k\" (UniqueName: \"kubernetes.io/projected/7885934c-f4f7-4615-bab8-4ac47a2eabe6-kube-api-access-dfz9k\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.961161 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.961197 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7885934c-f4f7-4615-bab8-4ac47a2eabe6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:33 crc kubenswrapper[4698]: I1006 11:48:33.970100 4698 scope.go:117] "RemoveContainer" containerID="86c8821214e9c5c044c610e74a31047cadc5b4add2851a5def0244ac01f8a606" Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.216131 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.229815 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ng7sj"] Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.716449 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.872391 4698 generic.go:334] "Generic (PLEG): container finished" podID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerID="c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6" exitCode=0 Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.872449 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hj2gc" event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerDied","Data":"c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6"} Oct 06 11:48:34 crc kubenswrapper[4698]: I1006 11:48:34.876772 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerStarted","Data":"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652"} Oct 06 11:48:35 crc kubenswrapper[4698]: I1006 11:48:35.336412 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" path="/var/lib/kubelet/pods/7885934c-f4f7-4615-bab8-4ac47a2eabe6/volumes" Oct 06 11:48:37 crc kubenswrapper[4698]: I1006 11:48:37.125819 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4l5q" podStartSLOduration=6.091987603 podStartE2EDuration="54.125788197s" podCreationTimestamp="2025-10-06 11:47:43 +0000 UTC" firstStartedPulling="2025-10-06 11:47:45.266158577 +0000 UTC m=+152.678850750" lastFinishedPulling="2025-10-06 11:48:33.299959131 +0000 UTC m=+200.712651344" observedRunningTime="2025-10-06 11:48:34.937459028 +0000 UTC m=+202.350151211" watchObservedRunningTime="2025-10-06 11:48:37.125788197 +0000 UTC m=+204.538480410" Oct 06 11:48:37 crc kubenswrapper[4698]: I1006 11:48:37.128805 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:48:37 crc kubenswrapper[4698]: I1006 11:48:37.129563 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-97rch" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="registry-server" containerID="cri-o://c3eb3303c68b09da9254ed250c32acd967ee08923b5b9091a9b53611794047c4" gracePeriod=2 Oct 06 11:48:37 crc kubenswrapper[4698]: I1006 
11:48:37.910192 4698 generic.go:334] "Generic (PLEG): container finished" podID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerID="c3eb3303c68b09da9254ed250c32acd967ee08923b5b9091a9b53611794047c4" exitCode=0 Oct 06 11:48:37 crc kubenswrapper[4698]: I1006 11:48:37.910277 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerDied","Data":"c3eb3303c68b09da9254ed250c32acd967ee08923b5b9091a9b53611794047c4"} Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.718382 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.836535 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities\") pod \"94fbe1e9-eab6-45e6-8cee-5df226a88355\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.836631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44hfp\" (UniqueName: \"kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp\") pod \"94fbe1e9-eab6-45e6-8cee-5df226a88355\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.836673 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content\") pod \"94fbe1e9-eab6-45e6-8cee-5df226a88355\" (UID: \"94fbe1e9-eab6-45e6-8cee-5df226a88355\") " Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.837966 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities" (OuterVolumeSpecName: "utilities") pod "94fbe1e9-eab6-45e6-8cee-5df226a88355" (UID: "94fbe1e9-eab6-45e6-8cee-5df226a88355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.846170 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp" (OuterVolumeSpecName: "kube-api-access-44hfp") pod "94fbe1e9-eab6-45e6-8cee-5df226a88355" (UID: "94fbe1e9-eab6-45e6-8cee-5df226a88355"). InnerVolumeSpecName "kube-api-access-44hfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.893594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94fbe1e9-eab6-45e6-8cee-5df226a88355" (UID: "94fbe1e9-eab6-45e6-8cee-5df226a88355"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.921847 4698 generic.go:334] "Generic (PLEG): container finished" podID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerID="7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e" exitCode=0 Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.922087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerDied","Data":"7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e"} Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.928263 4698 generic.go:334] "Generic (PLEG): container finished" podID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerID="87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9" exitCode=0 Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.928311 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerDied","Data":"87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9"} Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.937938 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44hfp\" (UniqueName: \"kubernetes.io/projected/94fbe1e9-eab6-45e6-8cee-5df226a88355-kube-api-access-44hfp\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.938677 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.938691 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94fbe1e9-eab6-45e6-8cee-5df226a88355-utilities\") on node 
\"crc\" DevicePath \"\"" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.945355 4698 generic.go:334] "Generic (PLEG): container finished" podID="5f73aecf-4753-4041-89d3-2df17062b304" containerID="a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273" exitCode=0 Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.945413 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerDied","Data":"a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273"} Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.960500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97rch" event={"ID":"94fbe1e9-eab6-45e6-8cee-5df226a88355","Type":"ContainerDied","Data":"750a9227acdd9a1b4f14ecd410a81ff0162174ebcafb78d238df43ca1dd48835"} Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.960576 4698 scope.go:117] "RemoveContainer" containerID="c3eb3303c68b09da9254ed250c32acd967ee08923b5b9091a9b53611794047c4" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.960748 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-97rch" Oct 06 11:48:38 crc kubenswrapper[4698]: I1006 11:48:38.988816 4698 scope.go:117] "RemoveContainer" containerID="f8facfecbe2cf787cca95efed45f664f30c7a1d95541db3f989cb255a0bccbba" Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.006958 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.012665 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-97rch"] Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.023541 4698 scope.go:117] "RemoveContainer" containerID="4d47054248e867788cbb67b99a73e78fb792e395bc1c0bec46581b254cd646c7" Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.338940 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" path="/var/lib/kubelet/pods/94fbe1e9-eab6-45e6-8cee-5df226a88355/volumes" Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.982177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerStarted","Data":"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c"} Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.993408 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerStarted","Data":"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c"} Oct 06 11:48:39 crc kubenswrapper[4698]: I1006 11:48:39.995680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj2gc" 
event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerStarted","Data":"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"} Oct 06 11:48:40 crc kubenswrapper[4698]: I1006 11:48:40.007552 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerStarted","Data":"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7"} Oct 06 11:48:40 crc kubenswrapper[4698]: I1006 11:48:40.011580 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wzpgq" podStartSLOduration=2.813935949 podStartE2EDuration="54.011560058s" podCreationTimestamp="2025-10-06 11:47:46 +0000 UTC" firstStartedPulling="2025-10-06 11:47:48.409488093 +0000 UTC m=+155.822180266" lastFinishedPulling="2025-10-06 11:48:39.607112162 +0000 UTC m=+207.019804375" observedRunningTime="2025-10-06 11:48:40.009679708 +0000 UTC m=+207.422371881" watchObservedRunningTime="2025-10-06 11:48:40.011560058 +0000 UTC m=+207.424252231" Oct 06 11:48:40 crc kubenswrapper[4698]: I1006 11:48:40.037969 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4wkc" podStartSLOduration=2.843923737 podStartE2EDuration="56.037948909s" podCreationTimestamp="2025-10-06 11:47:44 +0000 UTC" firstStartedPulling="2025-10-06 11:47:46.323883322 +0000 UTC m=+153.736575495" lastFinishedPulling="2025-10-06 11:48:39.517908484 +0000 UTC m=+206.930600667" observedRunningTime="2025-10-06 11:48:40.034900697 +0000 UTC m=+207.447592880" watchObservedRunningTime="2025-10-06 11:48:40.037948909 +0000 UTC m=+207.450641082" Oct 06 11:48:40 crc kubenswrapper[4698]: I1006 11:48:40.061406 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hj2gc" podStartSLOduration=3.5831408380000003 
podStartE2EDuration="56.06139053s" podCreationTimestamp="2025-10-06 11:47:44 +0000 UTC" firstStartedPulling="2025-10-06 11:47:46.323923843 +0000 UTC m=+153.736616016" lastFinishedPulling="2025-10-06 11:48:38.802173535 +0000 UTC m=+206.214865708" observedRunningTime="2025-10-06 11:48:40.060657551 +0000 UTC m=+207.473349724" watchObservedRunningTime="2025-10-06 11:48:40.06139053 +0000 UTC m=+207.474082703" Oct 06 11:48:40 crc kubenswrapper[4698]: I1006 11:48:40.082691 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ld4fp" podStartSLOduration=3.942720323 podStartE2EDuration="55.082674466s" podCreationTimestamp="2025-10-06 11:47:45 +0000 UTC" firstStartedPulling="2025-10-06 11:47:48.386249431 +0000 UTC m=+155.798941604" lastFinishedPulling="2025-10-06 11:48:39.526203534 +0000 UTC m=+206.938895747" observedRunningTime="2025-10-06 11:48:40.082581074 +0000 UTC m=+207.495273237" watchObservedRunningTime="2025-10-06 11:48:40.082674466 +0000 UTC m=+207.495366639" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.245194 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.246181 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.328204 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.464681 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.464741 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 
11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.536359 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.907648 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.907733 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:44 crc kubenswrapper[4698]: I1006 11:48:44.976157 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:45 crc kubenswrapper[4698]: I1006 11:48:45.087076 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:45 crc kubenswrapper[4698]: I1006 11:48:45.087161 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4l5q" Oct 06 11:48:45 crc kubenswrapper[4698]: I1006 11:48:45.089452 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hj2gc" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.237501 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.238135 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.310348 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.661878 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.661942 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:46 crc kubenswrapper[4698]: I1006 11:48:46.735962 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:47 crc kubenswrapper[4698]: I1006 11:48:47.102593 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:47 crc kubenswrapper[4698]: I1006 11:48:47.104914 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ld4fp" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.324453 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.325223 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4wkc" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="registry-server" containerID="cri-o://423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c" gracePeriod=2 Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.747933 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.776606 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities\") pod \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.776694 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content\") pod \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.776740 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdcxf\" (UniqueName: \"kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf\") pod \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\" (UID: \"f3910193-5a78-4c80-9eb8-8f05beb54b2f\") " Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.778990 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities" (OuterVolumeSpecName: "utilities") pod "f3910193-5a78-4c80-9eb8-8f05beb54b2f" (UID: "f3910193-5a78-4c80-9eb8-8f05beb54b2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.788760 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf" (OuterVolumeSpecName: "kube-api-access-hdcxf") pod "f3910193-5a78-4c80-9eb8-8f05beb54b2f" (UID: "f3910193-5a78-4c80-9eb8-8f05beb54b2f"). InnerVolumeSpecName "kube-api-access-hdcxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.837505 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3910193-5a78-4c80-9eb8-8f05beb54b2f" (UID: "f3910193-5a78-4c80-9eb8-8f05beb54b2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.878527 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.878580 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdcxf\" (UniqueName: \"kubernetes.io/projected/f3910193-5a78-4c80-9eb8-8f05beb54b2f-kube-api-access-hdcxf\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:49 crc kubenswrapper[4698]: I1006 11:48:49.878594 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3910193-5a78-4c80-9eb8-8f05beb54b2f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.074461 4698 generic.go:334] "Generic (PLEG): container finished" podID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerID="423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c" exitCode=0 Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.074516 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerDied","Data":"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c"} Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.074550 4698 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4wkc" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.074578 4698 scope.go:117] "RemoveContainer" containerID="423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.074560 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4wkc" event={"ID":"f3910193-5a78-4c80-9eb8-8f05beb54b2f","Type":"ContainerDied","Data":"8fd2fa207bd34ba998fdfb2b1863a37cac3cf903645b6081fcd88587abb07d4e"} Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.103311 4698 scope.go:117] "RemoveContainer" containerID="7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.103619 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.107351 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4wkc"] Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.126703 4698 scope.go:117] "RemoveContainer" containerID="2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.141351 4698 scope.go:117] "RemoveContainer" containerID="423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c" Oct 06 11:48:50 crc kubenswrapper[4698]: E1006 11:48:50.141770 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c\": container with ID starting with 423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c not found: ID does not exist" containerID="423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.142371 
4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c"} err="failed to get container status \"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c\": rpc error: code = NotFound desc = could not find container \"423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c\": container with ID starting with 423854f655163da44046f4f52326b645a0299ecdfc9620a3acf08ac57540db0c not found: ID does not exist" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.142426 4698 scope.go:117] "RemoveContainer" containerID="7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e" Oct 06 11:48:50 crc kubenswrapper[4698]: E1006 11:48:50.142813 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e\": container with ID starting with 7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e not found: ID does not exist" containerID="7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.142837 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e"} err="failed to get container status \"7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e\": rpc error: code = NotFound desc = could not find container \"7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e\": container with ID starting with 7e261cce5a4827701e20f12600bc08b429fc94ca48e7cfbac3b426a4e215a79e not found: ID does not exist" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.142851 4698 scope.go:117] "RemoveContainer" containerID="2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96" Oct 06 11:48:50 crc kubenswrapper[4698]: E1006 
11:48:50.143152 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96\": container with ID starting with 2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96 not found: ID does not exist" containerID="2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96" Oct 06 11:48:50 crc kubenswrapper[4698]: I1006 11:48:50.143179 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96"} err="failed to get container status \"2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96\": rpc error: code = NotFound desc = could not find container \"2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96\": container with ID starting with 2bb962a15ebca43a20d6ecfc7e20d093f16bf5444ee71671175118158f98ca96 not found: ID does not exist" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.122262 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.122609 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wzpgq" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="registry-server" containerID="cri-o://8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c" gracePeriod=2 Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.338846 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" path="/var/lib/kubelet/pods/f3910193-5a78-4c80-9eb8-8f05beb54b2f/volumes" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.535552 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.604831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities\") pod \"5f73aecf-4753-4041-89d3-2df17062b304\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.604937 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content\") pod \"5f73aecf-4753-4041-89d3-2df17062b304\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.604983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbfb\" (UniqueName: \"kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb\") pod \"5f73aecf-4753-4041-89d3-2df17062b304\" (UID: \"5f73aecf-4753-4041-89d3-2df17062b304\") " Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.606654 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities" (OuterVolumeSpecName: "utilities") pod "5f73aecf-4753-4041-89d3-2df17062b304" (UID: "5f73aecf-4753-4041-89d3-2df17062b304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.611806 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb" (OuterVolumeSpecName: "kube-api-access-jrbfb") pod "5f73aecf-4753-4041-89d3-2df17062b304" (UID: "5f73aecf-4753-4041-89d3-2df17062b304"). InnerVolumeSpecName "kube-api-access-jrbfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.617935 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f73aecf-4753-4041-89d3-2df17062b304" (UID: "5f73aecf-4753-4041-89d3-2df17062b304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.707549 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.707668 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f73aecf-4753-4041-89d3-2df17062b304-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:51 crc kubenswrapper[4698]: I1006 11:48:51.707748 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbfb\" (UniqueName: \"kubernetes.io/projected/5f73aecf-4753-4041-89d3-2df17062b304-kube-api-access-jrbfb\") on node \"crc\" DevicePath \"\"" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.094090 4698 generic.go:334] "Generic (PLEG): container finished" podID="5f73aecf-4753-4041-89d3-2df17062b304" containerID="8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c" exitCode=0 Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.094174 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerDied","Data":"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c"} Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.094704 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wzpgq" event={"ID":"5f73aecf-4753-4041-89d3-2df17062b304","Type":"ContainerDied","Data":"60a34d0924f99a3ab6614a075e0bbe74875abcd0c2a4c86c832cbba52eeecabf"} Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.094737 4698 scope.go:117] "RemoveContainer" containerID="8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.094419 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzpgq" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.119814 4698 scope.go:117] "RemoveContainer" containerID="a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.148334 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.153446 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzpgq"] Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.156217 4698 scope.go:117] "RemoveContainer" containerID="8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.173809 4698 scope.go:117] "RemoveContainer" containerID="8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c" Oct 06 11:48:52 crc kubenswrapper[4698]: E1006 11:48:52.174532 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c\": container with ID starting with 8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c not found: ID does not exist" containerID="8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.174746 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c"} err="failed to get container status \"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c\": rpc error: code = NotFound desc = could not find container \"8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c\": container with ID starting with 8d6cb05bfba60be53a12e055af3779a7e91c2259edc078b3e17e1a8e591fdc8c not found: ID does not exist" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.175079 4698 scope.go:117] "RemoveContainer" containerID="a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273" Oct 06 11:48:52 crc kubenswrapper[4698]: E1006 11:48:52.175846 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273\": container with ID starting with a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273 not found: ID does not exist" containerID="a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.176047 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273"} err="failed to get container status \"a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273\": rpc error: code = NotFound desc = could not find container \"a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273\": container with ID starting with a2d96103ddde85d403e17b8f40b21a61f8821753cedbba1c4f62be761ce9a273 not found: ID does not exist" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.176234 4698 scope.go:117] "RemoveContainer" containerID="8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3" Oct 06 11:48:52 crc kubenswrapper[4698]: E1006 
11:48:52.176772 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3\": container with ID starting with 8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3 not found: ID does not exist" containerID="8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3" Oct 06 11:48:52 crc kubenswrapper[4698]: I1006 11:48:52.176827 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3"} err="failed to get container status \"8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3\": rpc error: code = NotFound desc = could not find container \"8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3\": container with ID starting with 8f531219257aed3f8da6aca5fd183be9ba94548aef096933471e8b18c96ee9e3 not found: ID does not exist" Oct 06 11:48:53 crc kubenswrapper[4698]: I1006 11:48:53.338040 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f73aecf-4753-4041-89d3-2df17062b304" path="/var/lib/kubelet/pods/5f73aecf-4753-4041-89d3-2df17062b304/volumes" Oct 06 11:48:54 crc kubenswrapper[4698]: I1006 11:48:54.991414 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"] Oct 06 11:48:55 crc kubenswrapper[4698]: I1006 11:48:55.235545 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:48:55 crc kubenswrapper[4698]: I1006 11:48:55.235899 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:48:55 crc kubenswrapper[4698]: I1006 11:48:55.235960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:48:55 crc kubenswrapper[4698]: I1006 11:48:55.236785 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 11:48:55 crc kubenswrapper[4698]: I1006 11:48:55.236844 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b" gracePeriod=600 Oct 06 11:48:56 crc kubenswrapper[4698]: I1006 11:48:56.141687 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b" exitCode=0 Oct 06 11:48:56 crc kubenswrapper[4698]: I1006 11:48:56.141819 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b"} Oct 06 11:48:56 crc kubenswrapper[4698]: I1006 11:48:56.142235 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5"} Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.037222 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" podUID="2603dc30-08f8-4a0c-946f-4d4f971fae56" containerName="oauth-openshift" containerID="cri-o://8271b123d9471caebd1720e13a9c86e6017bdfba1b52e8941b3d5f50b79146a6" gracePeriod=15 Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.351134 4698 generic.go:334] "Generic (PLEG): container finished" podID="2603dc30-08f8-4a0c-946f-4d4f971fae56" containerID="8271b123d9471caebd1720e13a9c86e6017bdfba1b52e8941b3d5f50b79146a6" exitCode=0 Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.351211 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" event={"ID":"2603dc30-08f8-4a0c-946f-4d4f971fae56","Type":"ContainerDied","Data":"8271b123d9471caebd1720e13a9c86e6017bdfba1b52e8941b3d5f50b79146a6"} Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.519531 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.579964 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c8559d799-mtmxp"] Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580451 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580481 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580500 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580513 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580535 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580549 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580569 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580581 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580599 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580611 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580634 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580648 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580667 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580679 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580695 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580710 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580727 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580739 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580757 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580769 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="extract-utilities" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580786 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2603dc30-08f8-4a0c-946f-4d4f971fae56" containerName="oauth-openshift" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580798 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2603dc30-08f8-4a0c-946f-4d4f971fae56" containerName="oauth-openshift" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580819 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580860 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580874 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580886 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580899 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580911 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: E1006 11:49:20.580933 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.580945 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="extract-content" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581174 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3910193-5a78-4c80-9eb8-8f05beb54b2f" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581202 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7885934c-f4f7-4615-bab8-4ac47a2eabe6" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581225 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fbe1e9-eab6-45e6-8cee-5df226a88355" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581270 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2603dc30-08f8-4a0c-946f-4d4f971fae56" containerName="oauth-openshift" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581288 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2e4e2a-2d93-4a32-a6dc-f7540052e87d" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581301 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd42e4f5-72c4-4d35-920c-1ddbbc3d1851" containerName="pruner" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.581318 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f73aecf-4753-4041-89d3-2df17062b304" containerName="registry-server" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.582101 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.585375 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c8559d799-mtmxp"] Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.655958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.656699 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.656837 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.656958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.657825 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.657958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658394 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.657759 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658502 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658695 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658782 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpwq5\" (UniqueName: \"kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.658943 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 
11:49:20.659139 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659228 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca\") pod \"2603dc30-08f8-4a0c-946f-4d4f971fae56\" (UID: \"2603dc30-08f8-4a0c-946f-4d4f971fae56\") " Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659669 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-policies\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659753 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " 
pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659753 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659817 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-session\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659913 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.659997 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhlj\" (UniqueName: \"kubernetes.io/projected/832e4c55-b558-4265-9bbd-81b8424ba85a-kube-api-access-lqhlj\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660430 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-login\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660492 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-dir\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660560 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-error\") pod 
\"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660618 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660712 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-service-ca\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " 
pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660937 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660962 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.660986 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.661295 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.662335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.665092 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.665712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.666474 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.666553 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5" (OuterVolumeSpecName: "kube-api-access-bpwq5") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "kube-api-access-bpwq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.669746 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.670097 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.670311 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.670923 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.671644 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2603dc30-08f8-4a0c-946f-4d4f971fae56" (UID: "2603dc30-08f8-4a0c-946f-4d4f971fae56"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762374 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-policies\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762514 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-session\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhlj\" (UniqueName: \"kubernetes.io/projected/832e4c55-b558-4265-9bbd-81b8424ba85a-kube-api-access-lqhlj\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762628 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " 
pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762718 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-login\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-dir\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-error\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-service-ca\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762968 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.762983 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763002 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763035 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763050 4698 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763064 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpwq5\" (UniqueName: \"kubernetes.io/projected/2603dc30-08f8-4a0c-946f-4d4f971fae56-kube-api-access-bpwq5\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763079 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763095 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2603dc30-08f8-4a0c-946f-4d4f971fae56-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763110 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763123 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.763136 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2603dc30-08f8-4a0c-946f-4d4f971fae56-v4-0-config-user-template-provider-selection\") on node \"crc\" 
DevicePath \"\"" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.764047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-dir\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.764287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-audit-policies\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.765191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.766398 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.766623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.767642 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-login\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.767910 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-session\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.768448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-error\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.768923 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.769502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.769998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.770152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.770641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832e4c55-b558-4265-9bbd-81b8424ba85a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.789565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhlj\" (UniqueName: \"kubernetes.io/projected/832e4c55-b558-4265-9bbd-81b8424ba85a-kube-api-access-lqhlj\") pod \"oauth-openshift-c8559d799-mtmxp\" (UID: \"832e4c55-b558-4265-9bbd-81b8424ba85a\") " pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:20 crc kubenswrapper[4698]: I1006 11:49:20.903219 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.180345 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c8559d799-mtmxp"]
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.362687 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" event={"ID":"832e4c55-b558-4265-9bbd-81b8424ba85a","Type":"ContainerStarted","Data":"07949062289cb238344d81a468f8d7a8d72aa5860161ca1a5f7c8520496f89aa"}
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.365183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d" event={"ID":"2603dc30-08f8-4a0c-946f-4d4f971fae56","Type":"ContainerDied","Data":"7d68bda39eb9097540b22cdde7c074520bc3200d312307cdc5f23f6d0a80841e"}
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.365254 4698 scope.go:117] "RemoveContainer" containerID="8271b123d9471caebd1720e13a9c86e6017bdfba1b52e8941b3d5f50b79146a6"
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.365315 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wcq9d"
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.399573 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"]
Oct 06 11:49:21 crc kubenswrapper[4698]: I1006 11:49:21.403096 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wcq9d"]
Oct 06 11:49:22 crc kubenswrapper[4698]: I1006 11:49:22.377171 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" event={"ID":"832e4c55-b558-4265-9bbd-81b8424ba85a","Type":"ContainerStarted","Data":"f19d22d3824efc5ecba46da1861c0293703da5974768c529b2af8d529899d8a4"}
Oct 06 11:49:22 crc kubenswrapper[4698]: I1006 11:49:22.377692 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:22 crc kubenswrapper[4698]: I1006 11:49:22.388443 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp"
Oct 06 11:49:22 crc kubenswrapper[4698]: I1006 11:49:22.417885 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c8559d799-mtmxp" podStartSLOduration=27.417843207 podStartE2EDuration="27.417843207s" podCreationTimestamp="2025-10-06 11:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:22.410226335 +0000 UTC m=+249.822918548" watchObservedRunningTime="2025-10-06 11:49:22.417843207 +0000 UTC m=+249.830535450"
Oct 06 11:49:23 crc kubenswrapper[4698]: I1006 11:49:23.353637 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2603dc30-08f8-4a0c-946f-4d4f971fae56" path="/var/lib/kubelet/pods/2603dc30-08f8-4a0c-946f-4d4f971fae56/volumes"
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.931424 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.932347 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hj2gc" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="registry-server" containerID="cri-o://d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d" gracePeriod=30
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.956566 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4l5q"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.956980 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4l5q" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="registry-server" containerID="cri-o://fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" gracePeriod=30
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.962006 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.962292 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" containerID="cri-o://16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb" gracePeriod=30
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.973083 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.973554 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ld4fp" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="registry-server" containerID="cri-o://bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7" gracePeriod=30
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.983028 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.983729 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ck5qk" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="registry-server" containerID="cri-o://337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691" gracePeriod=30
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.990456 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbkqv"]
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.991341 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:43 crc kubenswrapper[4698]: I1006 11:49:43.996588 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbkqv"]
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.147183 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l292\" (UniqueName: \"kubernetes.io/projected/debcb559-cc53-4d24-9eb0-233c76c3cab1-kube-api-access-5l292\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.147242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.147366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.246993 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 is running failed: container process not found" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" cmd=["grpc_health_probe","-addr=:50051"]
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.248061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l292\" (UniqueName: \"kubernetes.io/projected/debcb559-cc53-4d24-9eb0-233c76c3cab1-kube-api-access-5l292\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.248083 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 is running failed: container process not found" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" cmd=["grpc_health_probe","-addr=:50051"]
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.248102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.248158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.248509 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 is running failed: container process not found" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" cmd=["grpc_health_probe","-addr=:50051"]
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.248538 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-r4l5q" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="registry-server"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.249884 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.254552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/debcb559-cc53-4d24-9eb0-233c76c3cab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.265949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l292\" (UniqueName: \"kubernetes.io/projected/debcb559-cc53-4d24-9eb0-233c76c3cab1-kube-api-access-5l292\") pod \"marketplace-operator-79b997595-sbkqv\" (UID: \"debcb559-cc53-4d24-9eb0-233c76c3cab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.314786 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.456110 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj2gc"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.463201 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld4fp"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.486236 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck5qk"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.488997 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.491785 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l5q"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.587270 4698 generic.go:334] "Generic (PLEG): container finished" podID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerID="d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d" exitCode=0
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.587426 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj2gc" event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerDied","Data":"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.587479 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj2gc" event={"ID":"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f","Type":"ContainerDied","Data":"3638b0470eb123828f4a0a89c4410df2ee57baf4686391958ce095d3863a52af"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.587512 4698 scope.go:117] "RemoveContainer" containerID="d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.587572 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj2gc"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.595786 4698 generic.go:334] "Generic (PLEG): container finished" podID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerID="16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb" exitCode=0
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.596335 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" event={"ID":"2ecabdc0-bd56-4f58-b619-32c52a2ade73","Type":"ContainerDied","Data":"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.596370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx" event={"ID":"2ecabdc0-bd56-4f58-b619-32c52a2ade73","Type":"ContainerDied","Data":"a797b2c91ebe7b95f6cc9e1152947b7a1aa4d645411b692739aa2b649a3c5f0b"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.597002 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-69fdx"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.626448 4698 generic.go:334] "Generic (PLEG): container finished" podID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" exitCode=0
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.626551 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l5q"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.626493 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerDied","Data":"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.626616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l5q" event={"ID":"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d","Type":"ContainerDied","Data":"f78642e88eac7dfa75730dcd0353c186605818f9dd7ac06eb234e87b0018c94c"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.643993 4698 generic.go:334] "Generic (PLEG): container finished" podID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerID="bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7" exitCode=0
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.644081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerDied","Data":"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.644119 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld4fp" event={"ID":"6bd2e241-8c70-44a5-bd89-b7bd4523640e","Type":"ContainerDied","Data":"8818236d842372f82eac2626d9ff1aa241bea85b356d7fe9850068f6cbf778e8"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.644212 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld4fp"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.647007 4698 generic.go:334] "Generic (PLEG): container finished" podID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerID="337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691" exitCode=0
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.647046 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerDied","Data":"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.647061 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck5qk" event={"ID":"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a","Type":"ContainerDied","Data":"9fd407c54f6d07f164afb9949edb632990943d85e601967d3b821dc943df1e04"}
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.647174 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck5qk"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.668544 4698 scope.go:117] "RemoveContainer" containerID="c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685501 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca\") pod \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content\") pod \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685611 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities\") pod \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685646 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content\") pod \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685663 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlqg\" (UniqueName: \"kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg\") pod \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\" (UID: \"6bd2e241-8c70-44a5-bd89-b7bd4523640e\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685689 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4njd\" (UniqueName: \"kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd\") pod \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtphv\" (UniqueName: \"kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv\") pod \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities\") pod \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685778 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities\") pod \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685803 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities\") pod \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\" (UID: \"06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content\") pod \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\" (UID: \"8d3b4a5d-7af6-45dc-b7b2-bf61d415808a\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685862 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctzp4\" (UniqueName: \"kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4\") pod \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics\") pod \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\" (UID: \"2ecabdc0-bd56-4f58-b619-32c52a2ade73\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685919 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content\") pod \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.685963 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558gg\" (UniqueName: \"kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg\") pod \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\" (UID: \"dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f\") "
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.686873 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2ecabdc0-bd56-4f58-b619-32c52a2ade73" (UID: "2ecabdc0-bd56-4f58-b619-32c52a2ade73"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.687335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities" (OuterVolumeSpecName: "utilities") pod "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" (UID: "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.687979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities" (OuterVolumeSpecName: "utilities") pod "6bd2e241-8c70-44a5-bd89-b7bd4523640e" (UID: "6bd2e241-8c70-44a5-bd89-b7bd4523640e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.688546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities" (OuterVolumeSpecName: "utilities") pod "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" (UID: "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.689791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities" (OuterVolumeSpecName: "utilities") pod "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" (UID: "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.690824 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd" (OuterVolumeSpecName: "kube-api-access-x4njd") pod "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" (UID: "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d"). InnerVolumeSpecName "kube-api-access-x4njd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.691445 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg" (OuterVolumeSpecName: "kube-api-access-fdlqg") pod "6bd2e241-8c70-44a5-bd89-b7bd4523640e" (UID: "6bd2e241-8c70-44a5-bd89-b7bd4523640e"). InnerVolumeSpecName "kube-api-access-fdlqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.693184 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2ecabdc0-bd56-4f58-b619-32c52a2ade73" (UID: "2ecabdc0-bd56-4f58-b619-32c52a2ade73"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.693205 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4" (OuterVolumeSpecName: "kube-api-access-ctzp4") pod "2ecabdc0-bd56-4f58-b619-32c52a2ade73" (UID: "2ecabdc0-bd56-4f58-b619-32c52a2ade73"). InnerVolumeSpecName "kube-api-access-ctzp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.698403 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv" (OuterVolumeSpecName: "kube-api-access-gtphv") pod "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" (UID: "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a"). InnerVolumeSpecName "kube-api-access-gtphv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.700602 4698 scope.go:117] "RemoveContainer" containerID="acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.704248 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg" (OuterVolumeSpecName: "kube-api-access-558gg") pod "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" (UID: "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f"). InnerVolumeSpecName "kube-api-access-558gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.714254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bd2e241-8c70-44a5-bd89-b7bd4523640e" (UID: "6bd2e241-8c70-44a5-bd89-b7bd4523640e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.718592 4698 scope.go:117] "RemoveContainer" containerID="d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.720509 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d\": container with ID starting with d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d not found: ID does not exist" containerID="d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.720561 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d"} err="failed to get container status \"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d\": rpc error: code = NotFound desc = could not find container \"d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d\": container with ID starting with d6066a5f91434010d9850bc1e296f4f8d35a199459495e0f9b8bccb72b062c9d not found: ID does not exist"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.720591 4698 scope.go:117] "RemoveContainer" containerID="c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.720942 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6\": container with ID starting with c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6 not found: ID does not exist" containerID="c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.720990 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6"} err="failed to get container status \"c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6\": rpc error: code = NotFound desc = could not find container \"c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6\": container with ID starting with c17ee04dd2a5a3e9d382bee18e37b995548570dcb6fbb538a12a4cf0340da6b6 not found: ID does not exist"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.721042 4698 scope.go:117] "RemoveContainer" containerID="acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.721707 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a\": container with ID starting with acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a not found: ID does not exist" containerID="acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.721741 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a"} err="failed to get container status \"acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a\": rpc error: code = NotFound desc = could not find container \"acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a\": container with ID starting with acf9bf419b3017fb2f5b7691127959401edf7d4271ff60115391afe3e803ae4a not found: ID does not exist"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.721758 4698 scope.go:117] "RemoveContainer" containerID="16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.742977 4698 scope.go:117] "RemoveContainer" containerID="16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"
Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.743603 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb\": container with ID starting with 16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb not found: ID does not exist" containerID="16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.743657 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb"} err="failed to get container status \"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb\": rpc error: code = NotFound desc = could not find container \"16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb\": container with ID starting with 16691a564458c7b3b0e3734b4ab0ada961fa38587633fb6742b47d8dfc8abbbb not found: ID does not exist"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.743691 4698 scope.go:117] "RemoveContainer" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.757376 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbkqv"]
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.760378 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" (UID: "dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.764708 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" (UID: "06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.782411 4698 scope.go:117] "RemoveContainer" containerID="97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e"
Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.786940 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" (UID: "8d3b4a5d-7af6-45dc-b7b2-bf61d415808a"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.787678 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.787767 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.787782 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.787987 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlqg\" (UniqueName: \"kubernetes.io/projected/6bd2e241-8c70-44a5-bd89-b7bd4523640e-kube-api-access-fdlqg\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788039 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd2e241-8c70-44a5-bd89-b7bd4523640e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788204 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4njd\" (UniqueName: \"kubernetes.io/projected/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-kube-api-access-x4njd\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788235 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtphv\" (UniqueName: \"kubernetes.io/projected/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-kube-api-access-gtphv\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 
crc kubenswrapper[4698]: I1006 11:49:44.788288 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788412 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788444 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788724 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788823 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctzp4\" (UniqueName: \"kubernetes.io/projected/2ecabdc0-bd56-4f58-b619-32c52a2ade73-kube-api-access-ctzp4\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.788962 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ecabdc0-bd56-4f58-b619-32c52a2ade73-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.789067 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.789145 4698 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-558gg\" (UniqueName: \"kubernetes.io/projected/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f-kube-api-access-558gg\") on node \"crc\" DevicePath \"\"" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.816684 4698 scope.go:117] "RemoveContainer" containerID="7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.841351 4698 scope.go:117] "RemoveContainer" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.842249 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652\": container with ID starting with fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 not found: ID does not exist" containerID="fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.842465 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652"} err="failed to get container status \"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652\": rpc error: code = NotFound desc = could not find container \"fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652\": container with ID starting with fc4cd3f22a69618426d47b55970b64c6fca7d09d3a63ea93e3d237d23dc97652 not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.842503 4698 scope.go:117] "RemoveContainer" containerID="97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.842905 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e\": container with ID starting with 97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e not found: ID does not exist" containerID="97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.842936 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e"} err="failed to get container status \"97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e\": rpc error: code = NotFound desc = could not find container \"97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e\": container with ID starting with 97a6bc86a92ad7d4e2e1e5255b2c502f59f2df00bd0509ba50ad8260e4d3349e not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.842963 4698 scope.go:117] "RemoveContainer" containerID="7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.843345 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90\": container with ID starting with 7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90 not found: ID does not exist" containerID="7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.843421 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90"} err="failed to get container status \"7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90\": rpc error: code = NotFound desc = could not find container \"7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90\": container with ID 
starting with 7b10f9b667bc36dc215163269e3f8fdd402063b521916a5b856a9e4f206cff90 not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.843472 4698 scope.go:117] "RemoveContainer" containerID="bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.862920 4698 scope.go:117] "RemoveContainer" containerID="87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.878826 4698 scope.go:117] "RemoveContainer" containerID="243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.893148 4698 scope.go:117] "RemoveContainer" containerID="bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.893472 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7\": container with ID starting with bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7 not found: ID does not exist" containerID="bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.893508 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7"} err="failed to get container status \"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7\": rpc error: code = NotFound desc = could not find container \"bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7\": container with ID starting with bee2e7b470d8679b3ab59e7c8285d9114c0954462a4612059835614689d819f7 not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.893534 4698 scope.go:117] "RemoveContainer" 
containerID="87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.893848 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9\": container with ID starting with 87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9 not found: ID does not exist" containerID="87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.893873 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9"} err="failed to get container status \"87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9\": rpc error: code = NotFound desc = could not find container \"87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9\": container with ID starting with 87cf1627dba0e813c7b012030ed11b727798baa4d244a414956638c424dfc7b9 not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.893888 4698 scope.go:117] "RemoveContainer" containerID="243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.894291 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca\": container with ID starting with 243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca not found: ID does not exist" containerID="243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.894308 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca"} err="failed to get container status \"243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca\": rpc error: code = NotFound desc = could not find container \"243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca\": container with ID starting with 243ff392c50b9a0e292012a489aff52e79dd5080ed6c07257b2b4d204b588fca not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.894321 4698 scope.go:117] "RemoveContainer" containerID="337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.913504 4698 scope.go:117] "RemoveContainer" containerID="696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.919229 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.921494 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hj2gc"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.943845 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.946880 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-69fdx"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.953913 4698 scope.go:117] "RemoveContainer" containerID="5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.985850 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4l5q"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.990434 4698 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-r4l5q"] Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.992625 4698 scope.go:117] "RemoveContainer" containerID="337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691" Oct 06 11:49:44 crc kubenswrapper[4698]: E1006 11:49:44.995893 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691\": container with ID starting with 337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691 not found: ID does not exist" containerID="337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.995947 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691"} err="failed to get container status \"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691\": rpc error: code = NotFound desc = could not find container \"337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691\": container with ID starting with 337cf23966d94a6f93f9570c55e03ff9cca5903cb6192420cd0f049755c29691 not found: ID does not exist" Oct 06 11:49:44 crc kubenswrapper[4698]: I1006 11:49:44.995975 4698 scope.go:117] "RemoveContainer" containerID="696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1" Oct 06 11:49:45 crc kubenswrapper[4698]: E1006 11:49:45.002414 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1\": container with ID starting with 696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1 not found: ID does not exist" containerID="696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 
11:49:45.002471 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1"} err="failed to get container status \"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1\": rpc error: code = NotFound desc = could not find container \"696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1\": container with ID starting with 696edc8a2158e83e0ad02b84f1f1a0253501eceed0da1381ac8cdfc5b66fdac1 not found: ID does not exist" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.002500 4698 scope.go:117] "RemoveContainer" containerID="5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f" Oct 06 11:49:45 crc kubenswrapper[4698]: E1006 11:49:45.003134 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f\": container with ID starting with 5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f not found: ID does not exist" containerID="5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.003185 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f"} err="failed to get container status \"5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f\": rpc error: code = NotFound desc = could not find container \"5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f\": container with ID starting with 5f95c736c6c487d320104a51f75dd271349291a2224637d9f87380034a8d880f not found: ID does not exist" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.023699 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"] Oct 06 11:49:45 crc kubenswrapper[4698]: 
I1006 11:49:45.029316 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld4fp"] Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.031980 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"] Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.034124 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ck5qk"] Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.335914 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" path="/var/lib/kubelet/pods/06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d/volumes" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.336935 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" path="/var/lib/kubelet/pods/2ecabdc0-bd56-4f58-b619-32c52a2ade73/volumes" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.337454 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" path="/var/lib/kubelet/pods/6bd2e241-8c70-44a5-bd89-b7bd4523640e/volumes" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.338582 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" path="/var/lib/kubelet/pods/8d3b4a5d-7af6-45dc-b7b2-bf61d415808a/volumes" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.339216 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" path="/var/lib/kubelet/pods/dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f/volumes" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.657260 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv" 
event={"ID":"debcb559-cc53-4d24-9eb0-233c76c3cab1","Type":"ContainerStarted","Data":"5df0634c84a1de9121a66ff963c4e286b09dd80613a806bd47dc3056c1de23b4"} Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.657319 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv" event={"ID":"debcb559-cc53-4d24-9eb0-233c76c3cab1","Type":"ContainerStarted","Data":"45b2ad3ad30ed21fc91fe9b13f33f1b310826e781c335b914b7e135d8030e5d8"} Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.658052 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.663612 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv" Oct 06 11:49:45 crc kubenswrapper[4698]: I1006 11:49:45.677542 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sbkqv" podStartSLOduration=2.677510592 podStartE2EDuration="2.677510592s" podCreationTimestamp="2025-10-06 11:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:49:45.675957033 +0000 UTC m=+273.088649246" watchObservedRunningTime="2025-10-06 11:49:45.677510592 +0000 UTC m=+273.090202765" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.149768 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8vp82"] Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150151 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150175 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150196 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150209 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150235 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150247 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150265 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150276 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150296 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150307 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150323 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150334 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150351 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150362 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150378 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150388 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150401 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150429 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150441 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150457 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150468 4698 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="extract-utilities" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150482 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150493 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: E1006 11:49:46.150507 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150518 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="extract-content" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150674 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecabdc0-bd56-4f58-b619-32c52a2ade73" containerName="marketplace-operator" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150694 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3b4a5d-7af6-45dc-b7b2-bf61d415808a" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150718 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2e241-8c70-44a5-bd89-b7bd4523640e" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150734 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc96eba9-f63b-4d3a-9bd0-c04ad5b1e96f" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.150753 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f9ad29-ff32-4a2d-95c1-95e42e3bfd7d" containerName="registry-server" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.151967 4698 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.154169 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.164544 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vp82"] Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.313699 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-utilities\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.313778 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-catalog-content\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.313848 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7bc\" (UniqueName: \"kubernetes.io/projected/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-kube-api-access-mm7bc\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.349878 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jkbch"] Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.353704 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.365868 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.368872 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkbch"] Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414726 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-utilities\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414773 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-catalog-content\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414804 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7bc\" (UniqueName: \"kubernetes.io/projected/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-kube-api-access-mm7bc\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " 
pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzskb\" (UniqueName: \"kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.414935 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.415353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-catalog-content\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.415438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-utilities\") pod \"redhat-marketplace-8vp82\" (UID: \"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.437132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7bc\" (UniqueName: \"kubernetes.io/projected/189e0eb7-102f-4ba2-ab71-0f5cd231bd2b-kube-api-access-mm7bc\") pod \"redhat-marketplace-8vp82\" (UID: 
\"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b\") " pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.505583 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.515834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzskb\" (UniqueName: \"kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.515873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.515915 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.516330 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.516592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.532457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzskb\" (UniqueName: \"kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb\") pod \"certified-operators-jkbch\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") " pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.685717 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.746552 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vp82"] Oct 06 11:49:46 crc kubenswrapper[4698]: W1006 11:49:46.760205 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189e0eb7_102f_4ba2_ab71_0f5cd231bd2b.slice/crio-bfc972470f92e3b63113f47ad6e904af3c5bdec6fb087e56004c6c29399356e0 WatchSource:0}: Error finding container bfc972470f92e3b63113f47ad6e904af3c5bdec6fb087e56004c6c29399356e0: Status 404 returned error can't find the container with id bfc972470f92e3b63113f47ad6e904af3c5bdec6fb087e56004c6c29399356e0 Oct 06 11:49:46 crc kubenswrapper[4698]: I1006 11:49:46.904113 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkbch"] Oct 06 11:49:46 crc kubenswrapper[4698]: W1006 11:49:46.906808 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22124369_4b3f_4da0_923e_2963f119496c.slice/crio-2612642ae33fa79c063b96c6dd619a735cb3fc417ae70cf35cacdda1c2e3a24c WatchSource:0}: Error finding container 2612642ae33fa79c063b96c6dd619a735cb3fc417ae70cf35cacdda1c2e3a24c: Status 404 returned error can't find the container with id 2612642ae33fa79c063b96c6dd619a735cb3fc417ae70cf35cacdda1c2e3a24c Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.686961 4698 generic.go:334] "Generic (PLEG): container finished" podID="189e0eb7-102f-4ba2-ab71-0f5cd231bd2b" containerID="112583939dc3c7f4119341c56ed79e73858871fa852eb4728dc1d5fc98c13fe6" exitCode=0 Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.687095 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vp82" event={"ID":"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b","Type":"ContainerDied","Data":"112583939dc3c7f4119341c56ed79e73858871fa852eb4728dc1d5fc98c13fe6"} Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.687188 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vp82" event={"ID":"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b","Type":"ContainerStarted","Data":"bfc972470f92e3b63113f47ad6e904af3c5bdec6fb087e56004c6c29399356e0"} Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.689004 4698 generic.go:334] "Generic (PLEG): container finished" podID="22124369-4b3f-4da0-923e-2963f119496c" containerID="b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992" exitCode=0 Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.689704 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerDied","Data":"b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992"} Oct 06 11:49:47 crc kubenswrapper[4698]: I1006 11:49:47.689814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerStarted","Data":"2612642ae33fa79c063b96c6dd619a735cb3fc417ae70cf35cacdda1c2e3a24c"} Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.556510 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrh74"] Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.562206 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.565228 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.570680 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrh74"] Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.759055 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7vf9r"] Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.761900 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.761898 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-utilities\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.762510 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28th\" (UniqueName: \"kubernetes.io/projected/e00ad13c-4719-46f8-883a-8bf6f03180ca-kube-api-access-x28th\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.762944 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-catalog-content\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.765318 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.785784 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vf9r"] Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.864880 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-catalog-content\") pod \"community-operators-7vf9r\" (UID: 
\"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.864957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-catalog-content\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.865000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbhs\" (UniqueName: \"kubernetes.io/projected/50429998-15ac-4de9-b112-c6fb17e9dd18-kube-api-access-hlbhs\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.865124 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-utilities\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.865157 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-utilities\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.865194 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x28th\" (UniqueName: \"kubernetes.io/projected/e00ad13c-4719-46f8-883a-8bf6f03180ca-kube-api-access-x28th\") pod 
\"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.866207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-catalog-content\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.866253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ad13c-4719-46f8-883a-8bf6f03180ca-utilities\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.904596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x28th\" (UniqueName: \"kubernetes.io/projected/e00ad13c-4719-46f8-883a-8bf6f03180ca-kube-api-access-x28th\") pod \"redhat-operators-rrh74\" (UID: \"e00ad13c-4719-46f8-883a-8bf6f03180ca\") " pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.907734 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.966923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-utilities\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.967051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-catalog-content\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.967106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlbhs\" (UniqueName: \"kubernetes.io/projected/50429998-15ac-4de9-b112-c6fb17e9dd18-kube-api-access-hlbhs\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.967670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-catalog-content\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.967953 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50429998-15ac-4de9-b112-c6fb17e9dd18-utilities\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " 
pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:48 crc kubenswrapper[4698]: I1006 11:49:48.984610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbhs\" (UniqueName: \"kubernetes.io/projected/50429998-15ac-4de9-b112-c6fb17e9dd18-kube-api-access-hlbhs\") pod \"community-operators-7vf9r\" (UID: \"50429998-15ac-4de9-b112-c6fb17e9dd18\") " pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.092637 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.418285 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrh74"] Oct 06 11:49:49 crc kubenswrapper[4698]: W1006 11:49:49.421663 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode00ad13c_4719_46f8_883a_8bf6f03180ca.slice/crio-41d03c0ab84d200d9c324ead30f1c288f3c463cbba67269628f98fe3baf79318 WatchSource:0}: Error finding container 41d03c0ab84d200d9c324ead30f1c288f3c463cbba67269628f98fe3baf79318: Status 404 returned error can't find the container with id 41d03c0ab84d200d9c324ead30f1c288f3c463cbba67269628f98fe3baf79318 Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.493006 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vf9r"] Oct 06 11:49:49 crc kubenswrapper[4698]: W1006 11:49:49.615715 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50429998_15ac_4de9_b112_c6fb17e9dd18.slice/crio-d78b5717fb3deca73e7d2bf89cbe716e6973ff1bc1da02b753804c7933d7d389 WatchSource:0}: Error finding container d78b5717fb3deca73e7d2bf89cbe716e6973ff1bc1da02b753804c7933d7d389: Status 404 returned error can't find the container 
with id d78b5717fb3deca73e7d2bf89cbe716e6973ff1bc1da02b753804c7933d7d389 Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.702531 4698 generic.go:334] "Generic (PLEG): container finished" podID="189e0eb7-102f-4ba2-ab71-0f5cd231bd2b" containerID="32a8317fcc13d25c9359ab222c696b437ec0b0b863ff4e9bbc6e8710c317346e" exitCode=0 Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.702617 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vp82" event={"ID":"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b","Type":"ContainerDied","Data":"32a8317fcc13d25c9359ab222c696b437ec0b0b863ff4e9bbc6e8710c317346e"} Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.704625 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vf9r" event={"ID":"50429998-15ac-4de9-b112-c6fb17e9dd18","Type":"ContainerStarted","Data":"d78b5717fb3deca73e7d2bf89cbe716e6973ff1bc1da02b753804c7933d7d389"} Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.712631 4698 generic.go:334] "Generic (PLEG): container finished" podID="22124369-4b3f-4da0-923e-2963f119496c" containerID="8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d" exitCode=0 Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.712698 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerDied","Data":"8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d"} Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.717842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrh74" event={"ID":"e00ad13c-4719-46f8-883a-8bf6f03180ca","Type":"ContainerDied","Data":"ed4b4ed5f9119c4f646f185ed8ed0aa8c46e1245cb46704cca736a32d9f874df"} Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.717533 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="e00ad13c-4719-46f8-883a-8bf6f03180ca" containerID="ed4b4ed5f9119c4f646f185ed8ed0aa8c46e1245cb46704cca736a32d9f874df" exitCode=0 Oct 06 11:49:49 crc kubenswrapper[4698]: I1006 11:49:49.718199 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrh74" event={"ID":"e00ad13c-4719-46f8-883a-8bf6f03180ca","Type":"ContainerStarted","Data":"41d03c0ab84d200d9c324ead30f1c288f3c463cbba67269628f98fe3baf79318"} Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.726585 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerStarted","Data":"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"} Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.730681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vp82" event={"ID":"189e0eb7-102f-4ba2-ab71-0f5cd231bd2b","Type":"ContainerStarted","Data":"f8cbd5b54f5143bfc279153c53f059082d790a49b9cad1e42dfb98fbb831efc4"} Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.732436 4698 generic.go:334] "Generic (PLEG): container finished" podID="50429998-15ac-4de9-b112-c6fb17e9dd18" containerID="51ccf575618a36e2e2fb92c0bf5702bd827338b29ae7e872f8f0bb536b2552b4" exitCode=0 Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.732512 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vf9r" event={"ID":"50429998-15ac-4de9-b112-c6fb17e9dd18","Type":"ContainerDied","Data":"51ccf575618a36e2e2fb92c0bf5702bd827338b29ae7e872f8f0bb536b2552b4"} Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.748233 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jkbch" podStartSLOduration=2.267492104 podStartE2EDuration="4.748214196s" podCreationTimestamp="2025-10-06 11:49:46 +0000 UTC" 
firstStartedPulling="2025-10-06 11:49:47.691538821 +0000 UTC m=+275.104230994" lastFinishedPulling="2025-10-06 11:49:50.172260873 +0000 UTC m=+277.584953086" observedRunningTime="2025-10-06 11:49:50.744588033 +0000 UTC m=+278.157280206" watchObservedRunningTime="2025-10-06 11:49:50.748214196 +0000 UTC m=+278.160906369" Oct 06 11:49:50 crc kubenswrapper[4698]: I1006 11:49:50.785832 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8vp82" podStartSLOduration=2.1271068890000002 podStartE2EDuration="4.78581268s" podCreationTimestamp="2025-10-06 11:49:46 +0000 UTC" firstStartedPulling="2025-10-06 11:49:47.688988436 +0000 UTC m=+275.101680609" lastFinishedPulling="2025-10-06 11:49:50.347694187 +0000 UTC m=+277.760386400" observedRunningTime="2025-10-06 11:49:50.785223044 +0000 UTC m=+278.197915297" watchObservedRunningTime="2025-10-06 11:49:50.78581268 +0000 UTC m=+278.198504853" Oct 06 11:49:51 crc kubenswrapper[4698]: I1006 11:49:51.741919 4698 generic.go:334] "Generic (PLEG): container finished" podID="e00ad13c-4719-46f8-883a-8bf6f03180ca" containerID="0fbbe61b4e25a570b89b1fd2399c5d172a712dd009d922f26a21ba360af5d841" exitCode=0 Oct 06 11:49:51 crc kubenswrapper[4698]: I1006 11:49:51.742039 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrh74" event={"ID":"e00ad13c-4719-46f8-883a-8bf6f03180ca","Type":"ContainerDied","Data":"0fbbe61b4e25a570b89b1fd2399c5d172a712dd009d922f26a21ba360af5d841"} Oct 06 11:49:51 crc kubenswrapper[4698]: I1006 11:49:51.746333 4698 generic.go:334] "Generic (PLEG): container finished" podID="50429998-15ac-4de9-b112-c6fb17e9dd18" containerID="217524946c73c84c0f1e35071782a9b01ecc586ddc5165c8b9cc18bb620630f5" exitCode=0 Oct 06 11:49:51 crc kubenswrapper[4698]: I1006 11:49:51.746514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vf9r" 
event={"ID":"50429998-15ac-4de9-b112-c6fb17e9dd18","Type":"ContainerDied","Data":"217524946c73c84c0f1e35071782a9b01ecc586ddc5165c8b9cc18bb620630f5"} Oct 06 11:49:53 crc kubenswrapper[4698]: I1006 11:49:53.770261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrh74" event={"ID":"e00ad13c-4719-46f8-883a-8bf6f03180ca","Type":"ContainerStarted","Data":"714032b34025738f4014b7973353a7d2ff6b470c4edce9ddcccd47a5699b47b0"} Oct 06 11:49:53 crc kubenswrapper[4698]: I1006 11:49:53.773548 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vf9r" event={"ID":"50429998-15ac-4de9-b112-c6fb17e9dd18","Type":"ContainerStarted","Data":"9775012ff094204a3d04e0766581ce321248b5219667d2f78af5a0d6c2b9f831"} Oct 06 11:49:53 crc kubenswrapper[4698]: I1006 11:49:53.797851 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrh74" podStartSLOduration=3.406222441 podStartE2EDuration="5.797823631s" podCreationTimestamp="2025-10-06 11:49:48 +0000 UTC" firstStartedPulling="2025-10-06 11:49:49.722491362 +0000 UTC m=+277.135183575" lastFinishedPulling="2025-10-06 11:49:52.114092582 +0000 UTC m=+279.526784765" observedRunningTime="2025-10-06 11:49:53.796955748 +0000 UTC m=+281.209647921" watchObservedRunningTime="2025-10-06 11:49:53.797823631 +0000 UTC m=+281.210515804" Oct 06 11:49:53 crc kubenswrapper[4698]: I1006 11:49:53.828638 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7vf9r" podStartSLOduration=4.418703506 podStartE2EDuration="5.82861602s" podCreationTimestamp="2025-10-06 11:49:48 +0000 UTC" firstStartedPulling="2025-10-06 11:49:50.734176897 +0000 UTC m=+278.146869070" lastFinishedPulling="2025-10-06 11:49:52.144089401 +0000 UTC m=+279.556781584" observedRunningTime="2025-10-06 11:49:53.827110421 +0000 UTC m=+281.239802614" 
watchObservedRunningTime="2025-10-06 11:49:53.82861602 +0000 UTC m=+281.241308203" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.506274 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.506669 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.565369 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.687029 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.688382 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.741749 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.848214 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8vp82" Oct 06 11:49:56 crc kubenswrapper[4698]: I1006 11:49:56.865467 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jkbch" Oct 06 11:49:58 crc kubenswrapper[4698]: I1006 11:49:58.908212 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:58 crc kubenswrapper[4698]: I1006 11:49:58.909152 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:58 crc 
kubenswrapper[4698]: I1006 11:49:58.972609 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:59 crc kubenswrapper[4698]: I1006 11:49:59.093166 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:59 crc kubenswrapper[4698]: I1006 11:49:59.093557 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:59 crc kubenswrapper[4698]: I1006 11:49:59.139147 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:49:59 crc kubenswrapper[4698]: I1006 11:49:59.872709 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrh74" Oct 06 11:49:59 crc kubenswrapper[4698]: I1006 11:49:59.899388 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7vf9r" Oct 06 11:50:55 crc kubenswrapper[4698]: I1006 11:50:55.235779 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:50:55 crc kubenswrapper[4698]: I1006 11:50:55.236984 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:51:25 crc kubenswrapper[4698]: I1006 11:51:25.235559 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:51:25 crc kubenswrapper[4698]: I1006 11:51:25.236288 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.235791 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.236658 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.236731 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.237668 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.237773 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5" gracePeriod=600 Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.673836 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5" exitCode=0 Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.673984 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5"} Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.674667 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14"} Oct 06 11:51:55 crc kubenswrapper[4698]: I1006 11:51:55.674738 4698 scope.go:117] "RemoveContainer" containerID="ceebe1dec8358bc8220156943410cc1cfe6da98b95752432afd682f49a6ea42b" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.561858 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b7wng"] Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.564570 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.594917 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b7wng"] Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691228 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691308 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/886780d2-ad5c-4d78-9cf3-b717e5eede36-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/886780d2-ad5c-4d78-9cf3-b717e5eede36-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691484 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-bound-sa-token\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj6nv\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-kube-api-access-sj6nv\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-certificates\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-tls\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.691575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-trusted-ca\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.735353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-trusted-ca\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/886780d2-ad5c-4d78-9cf3-b717e5eede36-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792765 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/886780d2-ad5c-4d78-9cf3-b717e5eede36-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792855 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-bound-sa-token\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792894 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj6nv\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-kube-api-access-sj6nv\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-tls\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.792961 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-certificates\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.794220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/886780d2-ad5c-4d78-9cf3-b717e5eede36-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.794901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-certificates\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.794977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/886780d2-ad5c-4d78-9cf3-b717e5eede36-trusted-ca\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.804697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-registry-tls\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.807800 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/886780d2-ad5c-4d78-9cf3-b717e5eede36-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.817438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-bound-sa-token\") pod \"image-registry-66df7c8f76-b7wng\" (UID: \"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.817709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj6nv\" (UniqueName: \"kubernetes.io/projected/886780d2-ad5c-4d78-9cf3-b717e5eede36-kube-api-access-sj6nv\") pod \"image-registry-66df7c8f76-b7wng\" (UID: 
\"886780d2-ad5c-4d78-9cf3-b717e5eede36\") " pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:29 crc kubenswrapper[4698]: I1006 11:52:29.901601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:30 crc kubenswrapper[4698]: I1006 11:52:30.178985 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b7wng"] Oct 06 11:52:30 crc kubenswrapper[4698]: I1006 11:52:30.973319 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" event={"ID":"886780d2-ad5c-4d78-9cf3-b717e5eede36","Type":"ContainerStarted","Data":"9bc4d297c3b782cf99d2d1fde5b05ab12e728c074a60bd059f1026d2d1f20a60"} Oct 06 11:52:30 crc kubenswrapper[4698]: I1006 11:52:30.973802 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:30 crc kubenswrapper[4698]: I1006 11:52:30.973826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" event={"ID":"886780d2-ad5c-4d78-9cf3-b717e5eede36","Type":"ContainerStarted","Data":"8d5195cd0f533ce005b715a4ca84746b5a99c00d4b43ff9f502904f6c3c25c1c"} Oct 06 11:52:31 crc kubenswrapper[4698]: I1006 11:52:31.007857 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" podStartSLOduration=2.007821961 podStartE2EDuration="2.007821961s" podCreationTimestamp="2025-10-06 11:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:52:31.002345201 +0000 UTC m=+438.415037404" watchObservedRunningTime="2025-10-06 11:52:31.007821961 +0000 UTC m=+438.420514174" Oct 06 11:52:49 crc kubenswrapper[4698]: I1006 
11:52:49.909306 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-b7wng" Oct 06 11:52:49 crc kubenswrapper[4698]: I1006 11:52:49.979986 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"] Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.034735 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" podUID="ecbf158d-99db-46c0-84e8-a71879e9f56f" containerName="registry" containerID="cri-o://5498b90298218554ccf878db4bca9944b35e346a4e435a7e87d7940d5e748bda" gracePeriod=30 Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.331537 4698 generic.go:334] "Generic (PLEG): container finished" podID="ecbf158d-99db-46c0-84e8-a71879e9f56f" containerID="5498b90298218554ccf878db4bca9944b35e346a4e435a7e87d7940d5e748bda" exitCode=0 Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.345378 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" event={"ID":"ecbf158d-99db-46c0-84e8-a71879e9f56f","Type":"ContainerDied","Data":"5498b90298218554ccf878db4bca9944b35e346a4e435a7e87d7940d5e748bda"} Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.455744 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483117 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483245 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483325 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvzh\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483354 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483571 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483688 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483719 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.483765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets\") pod \"ecbf158d-99db-46c0-84e8-a71879e9f56f\" (UID: \"ecbf158d-99db-46c0-84e8-a71879e9f56f\") " Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.485474 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.485652 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.505606 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.505608 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh" (OuterVolumeSpecName: "kube-api-access-lvvzh") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "kube-api-access-lvvzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.506087 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.509348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.509442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.511507 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ecbf158d-99db-46c0-84e8-a71879e9f56f" (UID: "ecbf158d-99db-46c0-84e8-a71879e9f56f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585522 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvzh\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-kube-api-access-lvvzh\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585571 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585584 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ecbf158d-99db-46c0-84e8-a71879e9f56f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585598 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585612 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ecbf158d-99db-46c0-84e8-a71879e9f56f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585625 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:15 crc kubenswrapper[4698]: I1006 11:53:15.585639 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ecbf158d-99db-46c0-84e8-a71879e9f56f-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 11:53:16 crc kubenswrapper[4698]: I1006 11:53:16.343893 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" event={"ID":"ecbf158d-99db-46c0-84e8-a71879e9f56f","Type":"ContainerDied","Data":"5ac9497d615353b727252ff84bda533ab49f61603c9f48dc0489c62f53c018d8"} Oct 06 11:53:16 crc kubenswrapper[4698]: I1006 11:53:16.343988 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9tmxl" Oct 06 11:53:16 crc kubenswrapper[4698]: I1006 11:53:16.344001 4698 scope.go:117] "RemoveContainer" containerID="5498b90298218554ccf878db4bca9944b35e346a4e435a7e87d7940d5e748bda" Oct 06 11:53:16 crc kubenswrapper[4698]: I1006 11:53:16.391851 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"] Oct 06 11:53:16 crc kubenswrapper[4698]: I1006 11:53:16.398148 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9tmxl"] Oct 06 11:53:17 crc kubenswrapper[4698]: I1006 11:53:17.340396 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbf158d-99db-46c0-84e8-a71879e9f56f" path="/var/lib/kubelet/pods/ecbf158d-99db-46c0-84e8-a71879e9f56f/volumes" Oct 06 11:53:55 crc kubenswrapper[4698]: I1006 11:53:55.235275 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:53:55 crc kubenswrapper[4698]: I1006 11:53:55.236214 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:54:25 crc kubenswrapper[4698]: I1006 11:54:25.235823 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:54:25 
crc kubenswrapper[4698]: I1006 11:54:25.236806 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.731985 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xncst"] Oct 06 11:54:50 crc kubenswrapper[4698]: E1006 11:54:50.733147 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbf158d-99db-46c0-84e8-a71879e9f56f" containerName="registry" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.733165 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbf158d-99db-46c0-84e8-a71879e9f56f" containerName="registry" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.733267 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbf158d-99db-46c0-84e8-a71879e9f56f" containerName="registry" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.734265 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.736705 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.736980 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4c2v6" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.737153 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.744816 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wpvgh"] Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.745669 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wpvgh" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.747740 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-p99rv" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.753234 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xncst"] Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.765649 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wpvgh"] Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.790134 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-sbjmt"] Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.791321 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.793305 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tlnz7" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.808393 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-sbjmt"] Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.819613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlq6\" (UniqueName: \"kubernetes.io/projected/b73b818b-7d2e-4c3f-9622-77ee5c1fc72d-kube-api-access-7jlq6\") pod \"cert-manager-cainjector-7f985d654d-xncst\" (UID: \"b73b818b-7d2e-4c3f-9622-77ee5c1fc72d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.819700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjg7\" (UniqueName: \"kubernetes.io/projected/be67c15a-01a0-435f-995b-f61cd109d8c8-kube-api-access-ngjg7\") pod \"cert-manager-webhook-5655c58dd6-sbjmt\" (UID: \"be67c15a-01a0-435f-995b-f61cd109d8c8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.819881 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrgs\" (UniqueName: \"kubernetes.io/projected/c57ea6be-96d1-4d4f-8c49-94ee240a5482-kube-api-access-slrgs\") pod \"cert-manager-5b446d88c5-wpvgh\" (UID: \"c57ea6be-96d1-4d4f-8c49-94ee240a5482\") " pod="cert-manager/cert-manager-5b446d88c5-wpvgh" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.920984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlq6\" (UniqueName: 
\"kubernetes.io/projected/b73b818b-7d2e-4c3f-9622-77ee5c1fc72d-kube-api-access-7jlq6\") pod \"cert-manager-cainjector-7f985d654d-xncst\" (UID: \"b73b818b-7d2e-4c3f-9622-77ee5c1fc72d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.921071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjg7\" (UniqueName: \"kubernetes.io/projected/be67c15a-01a0-435f-995b-f61cd109d8c8-kube-api-access-ngjg7\") pod \"cert-manager-webhook-5655c58dd6-sbjmt\" (UID: \"be67c15a-01a0-435f-995b-f61cd109d8c8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.921099 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrgs\" (UniqueName: \"kubernetes.io/projected/c57ea6be-96d1-4d4f-8c49-94ee240a5482-kube-api-access-slrgs\") pod \"cert-manager-5b446d88c5-wpvgh\" (UID: \"c57ea6be-96d1-4d4f-8c49-94ee240a5482\") " pod="cert-manager/cert-manager-5b446d88c5-wpvgh" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.943423 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlq6\" (UniqueName: \"kubernetes.io/projected/b73b818b-7d2e-4c3f-9622-77ee5c1fc72d-kube-api-access-7jlq6\") pod \"cert-manager-cainjector-7f985d654d-xncst\" (UID: \"b73b818b-7d2e-4c3f-9622-77ee5c1fc72d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.945889 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjg7\" (UniqueName: \"kubernetes.io/projected/be67c15a-01a0-435f-995b-f61cd109d8c8-kube-api-access-ngjg7\") pod \"cert-manager-webhook-5655c58dd6-sbjmt\" (UID: \"be67c15a-01a0-435f-995b-f61cd109d8c8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:50 crc kubenswrapper[4698]: I1006 11:54:50.945992 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrgs\" (UniqueName: \"kubernetes.io/projected/c57ea6be-96d1-4d4f-8c49-94ee240a5482-kube-api-access-slrgs\") pod \"cert-manager-5b446d88c5-wpvgh\" (UID: \"c57ea6be-96d1-4d4f-8c49-94ee240a5482\") " pod="cert-manager/cert-manager-5b446d88c5-wpvgh" Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.062649 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.069325 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wpvgh" Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.123956 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.434589 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-sbjmt"] Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.451834 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.552284 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xncst"] Oct 06 11:54:51 crc kubenswrapper[4698]: I1006 11:54:51.555514 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wpvgh"] Oct 06 11:54:51 crc kubenswrapper[4698]: W1006 11:54:51.558288 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57ea6be_96d1_4d4f_8c49_94ee240a5482.slice/crio-614896f348d9700b11096b52a15f1831415beb1eee0eb14a9ec581db17c23f2c WatchSource:0}: Error finding container 
614896f348d9700b11096b52a15f1831415beb1eee0eb14a9ec581db17c23f2c: Status 404 returned error can't find the container with id 614896f348d9700b11096b52a15f1831415beb1eee0eb14a9ec581db17c23f2c Oct 06 11:54:51 crc kubenswrapper[4698]: W1006 11:54:51.560685 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb73b818b_7d2e_4c3f_9622_77ee5c1fc72d.slice/crio-bd5c4b0de682812c009fa66ba6be0d6539bbeb7cd6190433ae22d3614c174ae9 WatchSource:0}: Error finding container bd5c4b0de682812c009fa66ba6be0d6539bbeb7cd6190433ae22d3614c174ae9: Status 404 returned error can't find the container with id bd5c4b0de682812c009fa66ba6be0d6539bbeb7cd6190433ae22d3614c174ae9 Oct 06 11:54:52 crc kubenswrapper[4698]: I1006 11:54:52.037855 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" event={"ID":"b73b818b-7d2e-4c3f-9622-77ee5c1fc72d","Type":"ContainerStarted","Data":"bd5c4b0de682812c009fa66ba6be0d6539bbeb7cd6190433ae22d3614c174ae9"} Oct 06 11:54:52 crc kubenswrapper[4698]: I1006 11:54:52.040261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" event={"ID":"be67c15a-01a0-435f-995b-f61cd109d8c8","Type":"ContainerStarted","Data":"250b781619231ae9beb17e8a8f28b38da568999f3eee6ed01cea06b6135efd29"} Oct 06 11:54:52 crc kubenswrapper[4698]: I1006 11:54:52.041791 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wpvgh" event={"ID":"c57ea6be-96d1-4d4f-8c49-94ee240a5482","Type":"ContainerStarted","Data":"614896f348d9700b11096b52a15f1831415beb1eee0eb14a9ec581db17c23f2c"} Oct 06 11:54:55 crc kubenswrapper[4698]: I1006 11:54:55.235252 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:54:55 crc kubenswrapper[4698]: I1006 11:54:55.235891 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:54:55 crc kubenswrapper[4698]: I1006 11:54:55.235971 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:54:55 crc kubenswrapper[4698]: I1006 11:54:55.236682 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 11:54:55 crc kubenswrapper[4698]: I1006 11:54:55.236736 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14" gracePeriod=600 Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.068930 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14" exitCode=0 Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.068996 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14"} Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.069837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba"} Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.069867 4698 scope.go:117] "RemoveContainer" containerID="b716722665ea296eaba31821da2396c6318752207c56c9a3dc888521bc6f3be5" Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.072862 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" event={"ID":"be67c15a-01a0-435f-995b-f61cd109d8c8","Type":"ContainerStarted","Data":"ec34f8550b77399a13fd8c922c0eb253241b028aa87984142d10dc0172573679"} Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.073513 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:54:56 crc kubenswrapper[4698]: I1006 11:54:56.117442 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" podStartSLOduration=2.381205521 podStartE2EDuration="6.117409262s" podCreationTimestamp="2025-10-06 11:54:50 +0000 UTC" firstStartedPulling="2025-10-06 11:54:51.451332797 +0000 UTC m=+578.864024970" lastFinishedPulling="2025-10-06 11:54:55.187536538 +0000 UTC m=+582.600228711" observedRunningTime="2025-10-06 11:54:56.107684698 +0000 UTC m=+583.520376911" watchObservedRunningTime="2025-10-06 11:54:56.117409262 +0000 UTC m=+583.530101475" Oct 06 11:54:58 crc kubenswrapper[4698]: I1006 11:54:58.094131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wpvgh" 
event={"ID":"c57ea6be-96d1-4d4f-8c49-94ee240a5482","Type":"ContainerStarted","Data":"e34986f77429295cab51eceaf7759be33df3ec10f99d17771d679ef75dfd79f1"} Oct 06 11:54:58 crc kubenswrapper[4698]: I1006 11:54:58.095658 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" event={"ID":"b73b818b-7d2e-4c3f-9622-77ee5c1fc72d","Type":"ContainerStarted","Data":"ae1e85e7ced1bacd83694ee92972628234de49684c072dc9d3cd6220b058c7b0"} Oct 06 11:54:58 crc kubenswrapper[4698]: I1006 11:54:58.113869 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-wpvgh" podStartSLOduration=2.671606334 podStartE2EDuration="8.1138475s" podCreationTimestamp="2025-10-06 11:54:50 +0000 UTC" firstStartedPulling="2025-10-06 11:54:51.561382391 +0000 UTC m=+578.974074584" lastFinishedPulling="2025-10-06 11:54:57.003623577 +0000 UTC m=+584.416315750" observedRunningTime="2025-10-06 11:54:58.112363824 +0000 UTC m=+585.525056007" watchObservedRunningTime="2025-10-06 11:54:58.1138475 +0000 UTC m=+585.526539673" Oct 06 11:54:58 crc kubenswrapper[4698]: I1006 11:54:58.135838 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-xncst" podStartSLOduration=2.705723419 podStartE2EDuration="8.13581484s" podCreationTimestamp="2025-10-06 11:54:50 +0000 UTC" firstStartedPulling="2025-10-06 11:54:51.564470089 +0000 UTC m=+578.977162282" lastFinishedPulling="2025-10-06 11:54:56.99456153 +0000 UTC m=+584.407253703" observedRunningTime="2025-10-06 11:54:58.132001836 +0000 UTC m=+585.544694059" watchObservedRunningTime="2025-10-06 11:54:58.13581484 +0000 UTC m=+585.548507013" Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.128848 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-sbjmt" Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.398770 4698 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sz4ws"] Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400483 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-controller" containerID="cri-o://8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400566 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="nbdb" containerID="cri-o://2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400733 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="northd" containerID="cri-o://b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400812 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400883 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-node" containerID="cri-o://8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.400984 
4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-acl-logging" containerID="cri-o://84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.401101 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="sbdb" containerID="cri-o://491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: I1006 11:55:01.457650 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" containerID="cri-o://cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" gracePeriod=30 Oct 06 11:55:01 crc kubenswrapper[4698]: E1006 11:55:01.550828 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc16ee453_14bb_4f57_addd_3fc27cb739de.slice/crio-8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode581ae92_9ea3_40a6_abd4_09eb81bb5be4.slice/crio-3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c.scope\": RecentStats: unable to find data in memory cache]" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.116369 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/3.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.120189 4698 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovn-acl-logging/0.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.121374 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovn-controller/0.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.122119 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.127803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovnkube-controller/3.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.132465 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovn-acl-logging/0.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133231 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sz4ws_c16ee453-14bb-4f57-addd-3fc27cb739de/ovn-controller/0.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133622 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133662 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133676 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" 
containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133688 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133700 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133712 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" exitCode=0 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133725 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" exitCode=143 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133747 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133866 4698 scope.go:117] "RemoveContainer" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133882 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133902 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133920 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133955 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133972 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133980 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133988 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133996 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134003 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134028 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134039 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134048 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134073 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134084 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134091 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134098 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134105 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134112 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134120 4698 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134128 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.133737 4698 generic.go:334] "Generic (PLEG): container finished" podID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" exitCode=143 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134135 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134230 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134270 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134308 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134320 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134334 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134342 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134349 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134356 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134367 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134376 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134388 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134398 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134413 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sz4ws" event={"ID":"c16ee453-14bb-4f57-addd-3fc27cb739de","Type":"ContainerDied","Data":"c4485aa5a84e67954bcc3496b5baf885e13b244e38470ec6802e752acac1e4e3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134429 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134437 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134445 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134457 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134466 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134477 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134488 4698 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134499 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134510 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.134521 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.136880 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/2.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.137660 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/1.log" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.137699 4698 generic.go:334] "Generic (PLEG): container finished" podID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" containerID="3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c" exitCode=2 Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.137735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerDied","Data":"3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.137762 4698 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9"} Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.138074 4698 scope.go:117] "RemoveContainer" containerID="3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.138386 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4f8bs_openshift-multus(e581ae92-9ea3-40a6-abd4-09eb81bb5be4)\"" pod="openshift-multus/multus-4f8bs" podUID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.163504 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191007 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k666p"] Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191349 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191376 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191388 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191397 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191411 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191420 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191431 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191441 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191457 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-node" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191466 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-node" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191479 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191489 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191500 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="sbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191508 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="sbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191526 
4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kubecfg-setup" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191535 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kubecfg-setup" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191543 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="nbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191550 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="nbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191561 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-acl-logging" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191568 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-acl-logging" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191578 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191585 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191593 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="northd" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191600 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="northd" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191725 4698 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191743 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191756 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-acl-logging" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191764 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="nbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191772 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="kube-rbac-proxy-node" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191781 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="northd" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191791 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="sbdb" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191802 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191811 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191821 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191831 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovn-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.191951 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.191962 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.192128 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" containerName="ovnkube-controller" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.196676 4698 scope.go:117] "RemoveContainer" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.210684 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.226487 4698 scope.go:117] "RemoveContainer" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.249432 4698 scope.go:117] "RemoveContainer" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.266077 4698 scope.go:117] "RemoveContainer" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.283478 4698 scope.go:117] "RemoveContainer" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295313 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295355 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295428 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295446 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295465 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295502 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc 
kubenswrapper[4698]: I1006 11:55:02.295547 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295619 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log" (OuterVolumeSpecName: "node-log") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295651 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket" (OuterVolumeSpecName: "log-socket") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295695 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtv5j\" (UniqueName: \"kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295785 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295883 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295959 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.295982 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296009 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides\") pod \"c16ee453-14bb-4f57-addd-3fc27cb739de\" (UID: \"c16ee453-14bb-4f57-addd-3fc27cb739de\") " Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296119 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash" (OuterVolumeSpecName: "host-slash") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: 
"c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296255 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296277 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296307 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296149 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296195 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296268 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296372 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296844 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296864 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296922 4698 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296949 4698 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.296965 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297000 4698 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-log-socket\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297036 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297053 4698 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc 
kubenswrapper[4698]: I1006 11:55:02.297070 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297087 4698 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-slash\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297103 4698 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297090 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297121 4698 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297155 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297173 4698 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-node-log\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297187 4698 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.297210 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.299696 4698 scope.go:117] "RemoveContainer" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.303614 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.304554 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j" (OuterVolumeSpecName: "kube-api-access-gtv5j") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "kube-api-access-gtv5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.313316 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c16ee453-14bb-4f57-addd-3fc27cb739de" (UID: "c16ee453-14bb-4f57-addd-3fc27cb739de"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.323203 4698 scope.go:117] "RemoveContainer" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.340555 4698 scope.go:117] "RemoveContainer" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.359624 4698 scope.go:117] "RemoveContainer" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.360230 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not exist" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 
11:55:02.360276 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} err="failed to get container status \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": rpc error: code = NotFound desc = could not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.360307 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.360655 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": container with ID starting with 1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7 not found: ID does not exist" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.360707 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} err="failed to get container status \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": rpc error: code = NotFound desc = could not find container \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": container with ID starting with 1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.360746 4698 scope.go:117] "RemoveContainer" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc 
kubenswrapper[4698]: E1006 11:55:02.362048 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": container with ID starting with 491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31 not found: ID does not exist" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362086 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} err="failed to get container status \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": rpc error: code = NotFound desc = could not find container \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": container with ID starting with 491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362103 4698 scope.go:117] "RemoveContainer" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.362372 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": container with ID starting with 2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3 not found: ID does not exist" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362406 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} err="failed to get container status 
\"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": rpc error: code = NotFound desc = could not find container \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": container with ID starting with 2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362425 4698 scope.go:117] "RemoveContainer" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.362890 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": container with ID starting with b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93 not found: ID does not exist" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362925 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} err="failed to get container status \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": rpc error: code = NotFound desc = could not find container \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": container with ID starting with b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.362948 4698 scope.go:117] "RemoveContainer" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.363489 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": container with ID starting with 4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0 not found: ID does not exist" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.363517 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} err="failed to get container status \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": rpc error: code = NotFound desc = could not find container \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": container with ID starting with 4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.363536 4698 scope.go:117] "RemoveContainer" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.363853 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": container with ID starting with 8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910 not found: ID does not exist" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.363877 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} err="failed to get container status \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": rpc error: code = NotFound desc = could not find container \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": container with ID 
starting with 8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.363894 4698 scope.go:117] "RemoveContainer" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.364333 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": container with ID starting with 84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22 not found: ID does not exist" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.364363 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} err="failed to get container status \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": rpc error: code = NotFound desc = could not find container \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": container with ID starting with 84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.364383 4698 scope.go:117] "RemoveContainer" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.364845 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": container with ID starting with 8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8 not found: ID does not exist" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 
11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.364879 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} err="failed to get container status \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": rpc error: code = NotFound desc = could not find container \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": container with ID starting with 8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.364900 4698 scope.go:117] "RemoveContainer" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: E1006 11:55:02.365275 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": container with ID starting with 205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17 not found: ID does not exist" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.365302 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} err="failed to get container status \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": rpc error: code = NotFound desc = could not find container \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": container with ID starting with 205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.365317 4698 scope.go:117] "RemoveContainer" 
containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.365942 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} err="failed to get container status \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": rpc error: code = NotFound desc = could not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.365963 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.366375 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} err="failed to get container status \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": rpc error: code = NotFound desc = could not find container \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": container with ID starting with 1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.366409 4698 scope.go:117] "RemoveContainer" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.366783 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} err="failed to get container status \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": rpc error: code = NotFound desc = could 
not find container \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": container with ID starting with 491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.366821 4698 scope.go:117] "RemoveContainer" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.367459 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} err="failed to get container status \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": rpc error: code = NotFound desc = could not find container \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": container with ID starting with 2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.367483 4698 scope.go:117] "RemoveContainer" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.367836 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} err="failed to get container status \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": rpc error: code = NotFound desc = could not find container \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": container with ID starting with b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.367863 4698 scope.go:117] "RemoveContainer" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 
11:55:02.368217 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} err="failed to get container status \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": rpc error: code = NotFound desc = could not find container \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": container with ID starting with 4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.368244 4698 scope.go:117] "RemoveContainer" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.368600 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} err="failed to get container status \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": rpc error: code = NotFound desc = could not find container \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": container with ID starting with 8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.368622 4698 scope.go:117] "RemoveContainer" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.369274 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} err="failed to get container status \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": rpc error: code = NotFound desc = could not find container \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": container with ID starting with 
84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.369390 4698 scope.go:117] "RemoveContainer" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.371599 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} err="failed to get container status \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": rpc error: code = NotFound desc = could not find container \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": container with ID starting with 8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.371627 4698 scope.go:117] "RemoveContainer" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.371906 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} err="failed to get container status \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": rpc error: code = NotFound desc = could not find container \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": container with ID starting with 205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.371939 4698 scope.go:117] "RemoveContainer" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.372257 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} err="failed to get container status \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": rpc error: code = NotFound desc = could not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.372890 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.373347 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} err="failed to get container status \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": rpc error: code = NotFound desc = could not find container \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": container with ID starting with 1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.373383 4698 scope.go:117] "RemoveContainer" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.373802 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} err="failed to get container status \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": rpc error: code = NotFound desc = could not find container \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": container with ID starting with 491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31 not found: ID does not 
exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.373829 4698 scope.go:117] "RemoveContainer" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.374254 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} err="failed to get container status \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": rpc error: code = NotFound desc = could not find container \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": container with ID starting with 2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.374357 4698 scope.go:117] "RemoveContainer" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.374793 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} err="failed to get container status \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": rpc error: code = NotFound desc = could not find container \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": container with ID starting with b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.374834 4698 scope.go:117] "RemoveContainer" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.375245 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} err="failed to get container status 
\"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": rpc error: code = NotFound desc = could not find container \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": container with ID starting with 4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.375347 4698 scope.go:117] "RemoveContainer" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.375755 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} err="failed to get container status \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": rpc error: code = NotFound desc = could not find container \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": container with ID starting with 8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.376396 4698 scope.go:117] "RemoveContainer" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.376947 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} err="failed to get container status \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": rpc error: code = NotFound desc = could not find container \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": container with ID starting with 84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.376988 4698 scope.go:117] "RemoveContainer" 
containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.377610 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} err="failed to get container status \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": rpc error: code = NotFound desc = could not find container \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": container with ID starting with 8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.377723 4698 scope.go:117] "RemoveContainer" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.378299 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} err="failed to get container status \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": rpc error: code = NotFound desc = could not find container \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": container with ID starting with 205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.378399 4698 scope.go:117] "RemoveContainer" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.378920 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} err="failed to get container status \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": rpc error: code = NotFound desc = could 
not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.379087 4698 scope.go:117] "RemoveContainer" containerID="1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.379593 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7"} err="failed to get container status \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": rpc error: code = NotFound desc = could not find container \"1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7\": container with ID starting with 1375b88900cdd1a4ab21df751c39790f731f001b8c1b39a71268cde7bdb984f7 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.379666 4698 scope.go:117] "RemoveContainer" containerID="491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.380361 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31"} err="failed to get container status \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": rpc error: code = NotFound desc = could not find container \"491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31\": container with ID starting with 491a9159044b5a6739f0342dcd9d18c913da6ddd28b975652bd8ae8e8dcefb31 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.380904 4698 scope.go:117] "RemoveContainer" containerID="2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 
11:55:02.381987 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3"} err="failed to get container status \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": rpc error: code = NotFound desc = could not find container \"2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3\": container with ID starting with 2a6f8bae5ddf5681cb1dfdffee5df24bd25fe2e195af410e61b73f4423b662c3 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.382024 4698 scope.go:117] "RemoveContainer" containerID="b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.382274 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93"} err="failed to get container status \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": rpc error: code = NotFound desc = could not find container \"b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93\": container with ID starting with b25b8d7a3301b8ded0d7f98399eda7629408fe40d967f14e2d3a21f8c859df93 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.382294 4698 scope.go:117] "RemoveContainer" containerID="4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.382603 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0"} err="failed to get container status \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": rpc error: code = NotFound desc = could not find container \"4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0\": container with ID starting with 
4d49da1d0ebeeb33da8e88d76fc6f413883ef787b8f656806f5f9197b29155b0 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.382701 4698 scope.go:117] "RemoveContainer" containerID="8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383117 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910"} err="failed to get container status \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": rpc error: code = NotFound desc = could not find container \"8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910\": container with ID starting with 8bbc201a64b48f484e15a46d51a365b2e47485daae3cf91a4a1171acca616910 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383136 4698 scope.go:117] "RemoveContainer" containerID="84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383402 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22"} err="failed to get container status \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": rpc error: code = NotFound desc = could not find container \"84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22\": container with ID starting with 84de18cd73b8c545eb0d68e49341cc37b6908ddd185ff39e2345bcaf56da9d22 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383488 4698 scope.go:117] "RemoveContainer" containerID="8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383958 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8"} err="failed to get container status \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": rpc error: code = NotFound desc = could not find container \"8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8\": container with ID starting with 8014cd3693df45cc1fadc13e0bb14ffeeac98ade06e6114994ae500d79657ec8 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.383983 4698 scope.go:117] "RemoveContainer" containerID="205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.384449 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17"} err="failed to get container status \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": rpc error: code = NotFound desc = could not find container \"205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17\": container with ID starting with 205bd4d226fa0410135ba1e4011b3a3fc2ad9ab5ac87f8f3b1c1602cede8ac17 not found: ID does not exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.384506 4698 scope.go:117] "RemoveContainer" containerID="cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.390783 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f"} err="failed to get container status \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": rpc error: code = NotFound desc = could not find container \"cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f\": container with ID starting with cb647b8f986bf8b37c05f914d99f5f0264ba7a6eb9d0378104d8dc28281dcd0f not found: ID does not 
exist" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.398807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-log-socket\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.398897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-var-lib-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.398928 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-bin\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.398954 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-script-lib\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.398978 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: 
\"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399059 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovn-node-metrics-cert\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-slash\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399131 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-etc-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-systemd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-env-overrides\") pod 
\"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-kubelet\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pk9v\" (UniqueName: \"kubernetes.io/projected/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-kube-api-access-8pk9v\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-netd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399743 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-ovn\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399871 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399935 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-systemd-units\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.399978 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-config\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-node-log\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400063 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-netns\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400163 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400215 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtv5j\" (UniqueName: \"kubernetes.io/projected/c16ee453-14bb-4f57-addd-3fc27cb739de-kube-api-access-gtv5j\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400229 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c16ee453-14bb-4f57-addd-3fc27cb739de-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400243 4698 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c16ee453-14bb-4f57-addd-3fc27cb739de-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.400254 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c16ee453-14bb-4f57-addd-3fc27cb739de-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.467115 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sz4ws"] Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.472563 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sz4ws"] Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501368 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-systemd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501462 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-env-overrides\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501509 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-systemd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-kubelet\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501591 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-kubelet\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pk9v\" 
(UniqueName: \"kubernetes.io/projected/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-kube-api-access-8pk9v\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501844 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-netd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-ovn\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-netd\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.501974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-ovn\") pod 
\"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-run-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-systemd-units\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-systemd-units\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502237 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-config\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-node-log\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-netns\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-log-socket\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502388 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-node-log\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502423 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-var-lib-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-run-netns\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-log-socket\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-bin\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovn-node-metrics-cert\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-cni-bin\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-script-lib\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502535 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-env-overrides\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.502675 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-var-lib-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-script-lib\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503343 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-slash\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-etc-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-host-slash\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-etc-openvswitch\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.503659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovnkube-config\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.506625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-ovn-node-metrics-cert\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.522133 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pk9v\" (UniqueName: \"kubernetes.io/projected/dc58c026-2a39-4797-a3d0-5ea54c1f6fa0-kube-api-access-8pk9v\") pod \"ovnkube-node-k666p\" (UID: \"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:02 crc kubenswrapper[4698]: I1006 11:55:02.525127 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:03 crc kubenswrapper[4698]: I1006 11:55:03.146422 4698 generic.go:334] "Generic (PLEG): container finished" podID="dc58c026-2a39-4797-a3d0-5ea54c1f6fa0" containerID="1bfe7d37c5b7b75df6edc7d30d0a3494381b5b233232e1523f3f42cd1e1885be" exitCode=0 Oct 06 11:55:03 crc kubenswrapper[4698]: I1006 11:55:03.146508 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerDied","Data":"1bfe7d37c5b7b75df6edc7d30d0a3494381b5b233232e1523f3f42cd1e1885be"} Oct 06 11:55:03 crc kubenswrapper[4698]: I1006 11:55:03.146544 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"03e7cc7c2c04568b631a1c4ba9c1206cdbb30767ad856f3a268968e0734b6086"} Oct 06 11:55:03 crc kubenswrapper[4698]: I1006 11:55:03.338470 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16ee453-14bb-4f57-addd-3fc27cb739de" path="/var/lib/kubelet/pods/c16ee453-14bb-4f57-addd-3fc27cb739de/volumes" Oct 06 11:55:04 crc kubenswrapper[4698]: I1006 11:55:04.162793 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"16c9792b468a276acba6673dfec20e0e89c46590ba744c2833654d0041537282"} Oct 06 11:55:04 crc kubenswrapper[4698]: I1006 11:55:04.162854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"4c5b06a5297fcf26c62a13140c6ff7e58b233cfe8d2bd246b0dda268ff196aa8"} Oct 06 11:55:04 crc kubenswrapper[4698]: I1006 11:55:04.162877 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"1978b2b226f65eb76e125dc4b6d47e8039ae0e8a5c97d39ed6d3c2bb98075159"} Oct 06 11:55:04 crc kubenswrapper[4698]: I1006 11:55:04.162892 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"5c9a94b08b2618187a7e7ba1b0274560ffc18dedeeee28b27c32fd6976b8d332"} Oct 06 11:55:05 crc kubenswrapper[4698]: I1006 11:55:05.187734 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"e9c898306d6cb64918cb3d2976550c7ecb658d88b88ca6f148f9bb486d3b38d4"} Oct 06 11:55:05 crc kubenswrapper[4698]: I1006 11:55:05.188204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"42011711c07cdd1a57daf9b00d833053cb8d75f6cdb962cd4711a9510089c838"} Oct 06 11:55:07 crc kubenswrapper[4698]: I1006 11:55:07.207265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"e2f33cf2fb64399bd0c54173c560884c4745506a110e4d79197b013f3e7ef99a"} Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.226081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" event={"ID":"dc58c026-2a39-4797-a3d0-5ea54c1f6fa0","Type":"ContainerStarted","Data":"635d8c41ac580e610a34c9cdf39af6e5283cc9f182d9e1d89207c057501b905f"} Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.228307 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:09 
crc kubenswrapper[4698]: I1006 11:55:09.228375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.228391 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.269505 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" podStartSLOduration=7.269485549 podStartE2EDuration="7.269485549s" podCreationTimestamp="2025-10-06 11:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:55:09.267503651 +0000 UTC m=+596.680195834" watchObservedRunningTime="2025-10-06 11:55:09.269485549 +0000 UTC m=+596.682177722" Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.299580 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:09 crc kubenswrapper[4698]: I1006 11:55:09.304758 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:13 crc kubenswrapper[4698]: I1006 11:55:13.539888 4698 scope.go:117] "RemoveContainer" containerID="be97dd896e48b6568ad734b601d530fd8b18a4455a970ed23490937247bfc9e9" Oct 06 11:55:14 crc kubenswrapper[4698]: I1006 11:55:14.298167 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/2.log" Oct 06 11:55:14 crc kubenswrapper[4698]: I1006 11:55:14.328651 4698 scope.go:117] "RemoveContainer" containerID="3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c" Oct 06 11:55:14 crc kubenswrapper[4698]: E1006 11:55:14.329038 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4f8bs_openshift-multus(e581ae92-9ea3-40a6-abd4-09eb81bb5be4)\"" pod="openshift-multus/multus-4f8bs" podUID="e581ae92-9ea3-40a6-abd4-09eb81bb5be4" Oct 06 11:55:27 crc kubenswrapper[4698]: I1006 11:55:27.330322 4698 scope.go:117] "RemoveContainer" containerID="3f1716b87d8466e0152842788eca9053d0fc39840337230f350c887ce4b4d14c" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.409407 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f8bs_e581ae92-9ea3-40a6-abd4-09eb81bb5be4/kube-multus/2.log" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.409894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f8bs" event={"ID":"e581ae92-9ea3-40a6-abd4-09eb81bb5be4","Type":"ContainerStarted","Data":"293c30867d485040843909874ad14617431f2d7e1b1428a02d97e08eb56aa9a5"} Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.724298 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx"] Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.725458 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.738513 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx"] Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.742558 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.928060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzm9\" (UniqueName: \"kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.928144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:28 crc kubenswrapper[4698]: I1006 11:55:28.928370 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: 
I1006 11:55:29.029857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.030528 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.030649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzm9\" (UniqueName: \"kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.030682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.031466 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.056423 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzm9\" (UniqueName: \"kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.347584 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:29 crc kubenswrapper[4698]: I1006 11:55:29.654378 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx"] Oct 06 11:55:29 crc kubenswrapper[4698]: W1006 11:55:29.669540 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3de473e_3386_4a45_bcf2_a98bab1b6c55.slice/crio-29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182 WatchSource:0}: Error finding container 29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182: Status 404 returned error can't find the container with id 29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182 Oct 06 11:55:30 crc kubenswrapper[4698]: I1006 11:55:30.430456 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerID="f89a114b05aceda1f96b30c8a1b35ff30d9ff62bb8d931b7be689d60487932f1" exitCode=0 
Oct 06 11:55:30 crc kubenswrapper[4698]: I1006 11:55:30.430526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerDied","Data":"f89a114b05aceda1f96b30c8a1b35ff30d9ff62bb8d931b7be689d60487932f1"} Oct 06 11:55:30 crc kubenswrapper[4698]: I1006 11:55:30.430578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerStarted","Data":"29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182"} Oct 06 11:55:32 crc kubenswrapper[4698]: I1006 11:55:32.449005 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerID="b6a72cc5feb3d13270b6eea57e0c03d52a8a15690b758e4e5994efb5d91bc703" exitCode=0 Oct 06 11:55:32 crc kubenswrapper[4698]: I1006 11:55:32.449206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerDied","Data":"b6a72cc5feb3d13270b6eea57e0c03d52a8a15690b758e4e5994efb5d91bc703"} Oct 06 11:55:32 crc kubenswrapper[4698]: I1006 11:55:32.566568 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k666p" Oct 06 11:55:33 crc kubenswrapper[4698]: I1006 11:55:33.462806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerStarted","Data":"4a37032fc53430b4a8e0b0ae96d39bc2c5108865ccfde5b0ec938deedcae34bc"} Oct 06 11:55:33 crc kubenswrapper[4698]: I1006 11:55:33.494400 4698 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" podStartSLOduration=4.108965278 podStartE2EDuration="5.494364019s" podCreationTimestamp="2025-10-06 11:55:28 +0000 UTC" firstStartedPulling="2025-10-06 11:55:30.433558425 +0000 UTC m=+617.846250628" lastFinishedPulling="2025-10-06 11:55:31.818957186 +0000 UTC m=+619.231649369" observedRunningTime="2025-10-06 11:55:33.490700868 +0000 UTC m=+620.903393081" watchObservedRunningTime="2025-10-06 11:55:33.494364019 +0000 UTC m=+620.907056232" Oct 06 11:55:34 crc kubenswrapper[4698]: I1006 11:55:34.475888 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerID="4a37032fc53430b4a8e0b0ae96d39bc2c5108865ccfde5b0ec938deedcae34bc" exitCode=0 Oct 06 11:55:34 crc kubenswrapper[4698]: I1006 11:55:34.475967 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerDied","Data":"4a37032fc53430b4a8e0b0ae96d39bc2c5108865ccfde5b0ec938deedcae34bc"} Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.859330 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.958684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvzm9\" (UniqueName: \"kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9\") pod \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.958804 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle\") pod \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.959002 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util\") pod \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\" (UID: \"c3de473e-3386-4a45-bcf2-a98bab1b6c55\") " Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.963136 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle" (OuterVolumeSpecName: "bundle") pod "c3de473e-3386-4a45-bcf2-a98bab1b6c55" (UID: "c3de473e-3386-4a45-bcf2-a98bab1b6c55"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.971356 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9" (OuterVolumeSpecName: "kube-api-access-jvzm9") pod "c3de473e-3386-4a45-bcf2-a98bab1b6c55" (UID: "c3de473e-3386-4a45-bcf2-a98bab1b6c55"). InnerVolumeSpecName "kube-api-access-jvzm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:55:35 crc kubenswrapper[4698]: I1006 11:55:35.973118 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util" (OuterVolumeSpecName: "util") pod "c3de473e-3386-4a45-bcf2-a98bab1b6c55" (UID: "c3de473e-3386-4a45-bcf2-a98bab1b6c55"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.061352 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.061410 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3de473e-3386-4a45-bcf2-a98bab1b6c55-util\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.061431 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvzm9\" (UniqueName: \"kubernetes.io/projected/c3de473e-3386-4a45-bcf2-a98bab1b6c55-kube-api-access-jvzm9\") on node \"crc\" DevicePath \"\"" Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.499198 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" event={"ID":"c3de473e-3386-4a45-bcf2-a98bab1b6c55","Type":"ContainerDied","Data":"29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182"} Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.499881 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ec312caba8078c0169ea9d8e8c89a4513a7cff44920c1e7ba114c93092b182" Oct 06 11:55:36 crc kubenswrapper[4698]: I1006 11:55:36.500075 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.118928 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn"] Oct 06 11:55:46 crc kubenswrapper[4698]: E1006 11:55:46.119833 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="util" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.119857 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="util" Oct 06 11:55:46 crc kubenswrapper[4698]: E1006 11:55:46.119888 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="extract" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.119899 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="extract" Oct 06 11:55:46 crc kubenswrapper[4698]: E1006 11:55:46.119911 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="pull" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.119921 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="pull" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.120162 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3de473e-3386-4a45-bcf2-a98bab1b6c55" containerName="extract" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.120828 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.122512 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fckct" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.123653 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.124733 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.132756 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.207614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kmjf\" (UniqueName: \"kubernetes.io/projected/795598bd-9625-48d4-8b2b-9d5d5418391a-kube-api-access-6kmjf\") pod \"obo-prometheus-operator-7c8cf85677-4vjwn\" (UID: \"795598bd-9625-48d4-8b2b-9d5d5418391a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.239445 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.240205 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.243225 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.243484 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kqtj6" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.267071 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.267979 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.269961 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.278648 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.308466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kmjf\" (UniqueName: \"kubernetes.io/projected/795598bd-9625-48d4-8b2b-9d5d5418391a-kube-api-access-6kmjf\") pod \"obo-prometheus-operator-7c8cf85677-4vjwn\" (UID: \"795598bd-9625-48d4-8b2b-9d5d5418391a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.308519 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.308589 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.308680 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.308782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.360443 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kmjf\" (UniqueName: \"kubernetes.io/projected/795598bd-9625-48d4-8b2b-9d5d5418391a-kube-api-access-6kmjf\") pod 
\"obo-prometheus-operator-7c8cf85677-4vjwn\" (UID: \"795598bd-9625-48d4-8b2b-9d5d5418391a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.410166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.410227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.410278 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.411129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 
crc kubenswrapper[4698]: I1006 11:55:46.421842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.422695 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.428639 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4be74b3-b8b9-45af-b971-bd29e82d0879-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx\" (UID: \"d4be74b3-b8b9-45af-b971-bd29e82d0879\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.434705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2986e2db-d42d-417a-b203-1eb36ae90468-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j\" (UID: \"2986e2db-d42d-417a-b203-1eb36ae90468\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.441489 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.499812 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9sp88"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.500961 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.504067 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-l9s6x" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.504280 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.512652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv78s\" (UniqueName: \"kubernetes.io/projected/c711bfcd-11d2-4ad7-8059-9f1f406dd064-kube-api-access-wv78s\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.512741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c711bfcd-11d2-4ad7-8059-9f1f406dd064-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.562559 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.587677 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.604956 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9sp88"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.613735 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c711bfcd-11d2-4ad7-8059-9f1f406dd064-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.613789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv78s\" (UniqueName: \"kubernetes.io/projected/c711bfcd-11d2-4ad7-8059-9f1f406dd064-kube-api-access-wv78s\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.621056 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c711bfcd-11d2-4ad7-8059-9f1f406dd064-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.646478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv78s\" (UniqueName: 
\"kubernetes.io/projected/c711bfcd-11d2-4ad7-8059-9f1f406dd064-kube-api-access-wv78s\") pod \"observability-operator-cc5f78dfc-9sp88\" (UID: \"c711bfcd-11d2-4ad7-8059-9f1f406dd064\") " pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.679160 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-gl2vd"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.679910 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.689485 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-p6w5r" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.694072 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-gl2vd"] Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.716980 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.717080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vg4w\" (UniqueName: \"kubernetes.io/projected/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-kube-api-access-6vg4w\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.775249 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn"]
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.818744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.818815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vg4w\" (UniqueName: \"kubernetes.io/projected/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-kube-api-access-6vg4w\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.819950 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.839375 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88"
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.844565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vg4w\" (UniqueName: \"kubernetes.io/projected/bc83ee37-67c1-4393-83c8-9ee46b2c1d30-kube-api-access-6vg4w\") pod \"perses-operator-54bc95c9fb-gl2vd\" (UID: \"bc83ee37-67c1-4393-83c8-9ee46b2c1d30\") " pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.856488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx"]
Oct 06 11:55:46 crc kubenswrapper[4698]: W1006 11:55:46.881177 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4be74b3_b8b9_45af_b971_bd29e82d0879.slice/crio-379125ac318d97138c2ae4fd39d74f224fa7f88070b6324278b19f372ffe3a39 WatchSource:0}: Error finding container 379125ac318d97138c2ae4fd39d74f224fa7f88070b6324278b19f372ffe3a39: Status 404 returned error can't find the container with id 379125ac318d97138c2ae4fd39d74f224fa7f88070b6324278b19f372ffe3a39
Oct 06 11:55:46 crc kubenswrapper[4698]: I1006 11:55:46.917156 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j"]
Oct 06 11:55:46 crc kubenswrapper[4698]: W1006 11:55:46.936200 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2986e2db_d42d_417a_b203_1eb36ae90468.slice/crio-06d143fc83c3fe6df1a88fbc1fe68ec70245a473ab2cc2bfb5bec7d8c8677e26 WatchSource:0}: Error finding container 06d143fc83c3fe6df1a88fbc1fe68ec70245a473ab2cc2bfb5bec7d8c8677e26: Status 404 returned error can't find the container with id 06d143fc83c3fe6df1a88fbc1fe68ec70245a473ab2cc2bfb5bec7d8c8677e26
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.033093 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.113336 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9sp88"]
Oct 06 11:55:47 crc kubenswrapper[4698]: W1006 11:55:47.120856 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc711bfcd_11d2_4ad7_8059_9f1f406dd064.slice/crio-e0c95b4d49539caac348d42535396ca6169e9ed19603ba6392c56ee17fcc1097 WatchSource:0}: Error finding container e0c95b4d49539caac348d42535396ca6169e9ed19603ba6392c56ee17fcc1097: Status 404 returned error can't find the container with id e0c95b4d49539caac348d42535396ca6169e9ed19603ba6392c56ee17fcc1097
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.245624 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-gl2vd"]
Oct 06 11:55:47 crc kubenswrapper[4698]: W1006 11:55:47.254601 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc83ee37_67c1_4393_83c8_9ee46b2c1d30.slice/crio-aa2b975785d2acb14ff3c619dfc865d210f43e6c6f9d01ad799241a6a434e418 WatchSource:0}: Error finding container aa2b975785d2acb14ff3c619dfc865d210f43e6c6f9d01ad799241a6a434e418: Status 404 returned error can't find the container with id aa2b975785d2acb14ff3c619dfc865d210f43e6c6f9d01ad799241a6a434e418
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.573445 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" event={"ID":"2986e2db-d42d-417a-b203-1eb36ae90468","Type":"ContainerStarted","Data":"06d143fc83c3fe6df1a88fbc1fe68ec70245a473ab2cc2bfb5bec7d8c8677e26"}
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.575129 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" event={"ID":"d4be74b3-b8b9-45af-b971-bd29e82d0879","Type":"ContainerStarted","Data":"379125ac318d97138c2ae4fd39d74f224fa7f88070b6324278b19f372ffe3a39"}
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.576365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" event={"ID":"c711bfcd-11d2-4ad7-8059-9f1f406dd064","Type":"ContainerStarted","Data":"e0c95b4d49539caac348d42535396ca6169e9ed19603ba6392c56ee17fcc1097"}
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.577794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" event={"ID":"795598bd-9625-48d4-8b2b-9d5d5418391a","Type":"ContainerStarted","Data":"d4fdb278a82b3bc53136243ebb7098e8cd13d94821df82885c92cf9250f20c9d"}
Oct 06 11:55:47 crc kubenswrapper[4698]: I1006 11:55:47.578979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" event={"ID":"bc83ee37-67c1-4393-83c8-9ee46b2c1d30","Type":"ContainerStarted","Data":"aa2b975785d2acb14ff3c619dfc865d210f43e6c6f9d01ad799241a6a434e418"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.726491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" event={"ID":"c711bfcd-11d2-4ad7-8059-9f1f406dd064","Type":"ContainerStarted","Data":"372fd6f1b58ab02780aaf12aad8de08b13db5809561454d0de35b84b6757e5a1"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.727477 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.729183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" event={"ID":"795598bd-9625-48d4-8b2b-9d5d5418391a","Type":"ContainerStarted","Data":"87b7a1c0494477f4da16a2a72bd1ec68d922ac1d3e11acf64f641e5b7263ee9c"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.729451 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.731050 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" event={"ID":"bc83ee37-67c1-4393-83c8-9ee46b2c1d30","Type":"ContainerStarted","Data":"0677ae7c1b74ee113b7bb7484b76bf0bd98a437d512f23c5c969f720febf2c2c"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.731241 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.732867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" event={"ID":"2986e2db-d42d-417a-b203-1eb36ae90468","Type":"ContainerStarted","Data":"304c89fc7167577a30073b4866d5128cfc88324107efc17c320bf982a1d51cc8"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.739486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" event={"ID":"d4be74b3-b8b9-45af-b971-bd29e82d0879","Type":"ContainerStarted","Data":"c18060d7f3fb862666d9109142a0961302827e0c96604cd04685946305b21ed9"}
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.774749 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-9sp88" podStartSLOduration=1.943197118 podStartE2EDuration="15.774727568s" podCreationTimestamp="2025-10-06 11:55:46 +0000 UTC" firstStartedPulling="2025-10-06 11:55:47.131835733 +0000 UTC m=+634.544527896" lastFinishedPulling="2025-10-06 11:56:00.963366173 +0000 UTC m=+648.376058346" observedRunningTime="2025-10-06 11:56:01.757102515 +0000 UTC m=+649.169794688" watchObservedRunningTime="2025-10-06 11:56:01.774727568 +0000 UTC m=+649.187419741"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.808491 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx" podStartSLOduration=1.841400316 podStartE2EDuration="15.808471316s" podCreationTimestamp="2025-10-06 11:55:46 +0000 UTC" firstStartedPulling="2025-10-06 11:55:46.886446236 +0000 UTC m=+634.299138409" lastFinishedPulling="2025-10-06 11:56:00.853517236 +0000 UTC m=+648.266209409" observedRunningTime="2025-10-06 11:56:01.806447347 +0000 UTC m=+649.219139520" watchObservedRunningTime="2025-10-06 11:56:01.808471316 +0000 UTC m=+649.221163489"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.852129 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j" podStartSLOduration=1.915512708 podStartE2EDuration="15.852110548s" podCreationTimestamp="2025-10-06 11:55:46 +0000 UTC" firstStartedPulling="2025-10-06 11:55:46.938474075 +0000 UTC m=+634.351166248" lastFinishedPulling="2025-10-06 11:56:00.875071915 +0000 UTC m=+648.287764088" observedRunningTime="2025-10-06 11:56:01.850304314 +0000 UTC m=+649.262996487" watchObservedRunningTime="2025-10-06 11:56:01.852110548 +0000 UTC m=+649.264802711"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.876904 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd" podStartSLOduration=2.223361718 podStartE2EDuration="15.876886777s" podCreationTimestamp="2025-10-06 11:55:46 +0000 UTC" firstStartedPulling="2025-10-06 11:55:47.257295074 +0000 UTC m=+634.669987247" lastFinishedPulling="2025-10-06 11:56:00.910820133 +0000 UTC m=+648.323512306" observedRunningTime="2025-10-06 11:56:01.875557344 +0000 UTC m=+649.288249517" watchObservedRunningTime="2025-10-06 11:56:01.876886777 +0000 UTC m=+649.289578950"
Oct 06 11:56:01 crc kubenswrapper[4698]: I1006 11:56:01.903654 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-4vjwn" podStartSLOduration=1.803049955 podStartE2EDuration="15.903632253s" podCreationTimestamp="2025-10-06 11:55:46 +0000 UTC" firstStartedPulling="2025-10-06 11:55:46.78967031 +0000 UTC m=+634.202362483" lastFinishedPulling="2025-10-06 11:56:00.890252588 +0000 UTC m=+648.302944781" observedRunningTime="2025-10-06 11:56:01.902217649 +0000 UTC m=+649.314909842" watchObservedRunningTime="2025-10-06 11:56:01.903632253 +0000 UTC m=+649.316324426"
Oct 06 11:56:07 crc kubenswrapper[4698]: I1006 11:56:07.037415 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-gl2vd"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.603709 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"]
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.606054 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.610656 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.620629 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"]
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.624573 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.624644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9824d\" (UniqueName: \"kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.624744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.726005 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.726085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9824d\" (UniqueName: \"kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.726137 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.726770 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.726818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.754378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9824d\" (UniqueName: \"kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:25 crc kubenswrapper[4698]: I1006 11:56:25.935279 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:26 crc kubenswrapper[4698]: I1006 11:56:26.441579 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"]
Oct 06 11:56:26 crc kubenswrapper[4698]: I1006 11:56:26.903655 4698 generic.go:334] "Generic (PLEG): container finished" podID="f81cd182-baf0-4779-8a64-b90655bb2275" containerID="4e121082ee6ecda6e6f3c37ee00be6a793bd376df4671472403306d679ddb4b3" exitCode=0
Oct 06 11:56:26 crc kubenswrapper[4698]: I1006 11:56:26.903726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv" event={"ID":"f81cd182-baf0-4779-8a64-b90655bb2275","Type":"ContainerDied","Data":"4e121082ee6ecda6e6f3c37ee00be6a793bd376df4671472403306d679ddb4b3"}
Oct 06 11:56:26 crc kubenswrapper[4698]: I1006 11:56:26.904089 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv" event={"ID":"f81cd182-baf0-4779-8a64-b90655bb2275","Type":"ContainerStarted","Data":"2266771c0c02a4e84c21012a0aa758b7063f0475f3f4ebaac45ebdc7051daa83"}
Oct 06 11:56:28 crc kubenswrapper[4698]: I1006 11:56:28.959811 4698 generic.go:334] "Generic (PLEG): container finished" podID="f81cd182-baf0-4779-8a64-b90655bb2275" containerID="b296f2d8081019d40edfa33d846323734fce4677c614917b3b63deac52dbf4d1" exitCode=0
Oct 06 11:56:28 crc kubenswrapper[4698]: I1006 11:56:28.959924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv" event={"ID":"f81cd182-baf0-4779-8a64-b90655bb2275","Type":"ContainerDied","Data":"b296f2d8081019d40edfa33d846323734fce4677c614917b3b63deac52dbf4d1"}
Oct 06 11:56:29 crc kubenswrapper[4698]: I1006 11:56:29.969319 4698 generic.go:334] "Generic (PLEG): container finished" podID="f81cd182-baf0-4779-8a64-b90655bb2275" containerID="0b3437ff6b35cec05bfc16387e935a8b3d44e6d85c7ab70980c53a1ed4f8b2e3" exitCode=0
Oct 06 11:56:29 crc kubenswrapper[4698]: I1006 11:56:29.969450 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv" event={"ID":"f81cd182-baf0-4779-8a64-b90655bb2275","Type":"ContainerDied","Data":"0b3437ff6b35cec05bfc16387e935a8b3d44e6d85c7ab70980c53a1ed4f8b2e3"}
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.237471 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.420110 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9824d\" (UniqueName: \"kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d\") pod \"f81cd182-baf0-4779-8a64-b90655bb2275\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") "
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.420196 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle\") pod \"f81cd182-baf0-4779-8a64-b90655bb2275\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") "
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.420302 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util\") pod \"f81cd182-baf0-4779-8a64-b90655bb2275\" (UID: \"f81cd182-baf0-4779-8a64-b90655bb2275\") "
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.422455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle" (OuterVolumeSpecName: "bundle") pod "f81cd182-baf0-4779-8a64-b90655bb2275" (UID: "f81cd182-baf0-4779-8a64-b90655bb2275"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.431534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d" (OuterVolumeSpecName: "kube-api-access-9824d") pod "f81cd182-baf0-4779-8a64-b90655bb2275" (UID: "f81cd182-baf0-4779-8a64-b90655bb2275"). InnerVolumeSpecName "kube-api-access-9824d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.451398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util" (OuterVolumeSpecName: "util") pod "f81cd182-baf0-4779-8a64-b90655bb2275" (UID: "f81cd182-baf0-4779-8a64-b90655bb2275"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.523085 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9824d\" (UniqueName: \"kubernetes.io/projected/f81cd182-baf0-4779-8a64-b90655bb2275-kube-api-access-9824d\") on node \"crc\" DevicePath \"\""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.523183 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.523311 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f81cd182-baf0-4779-8a64-b90655bb2275-util\") on node \"crc\" DevicePath \"\""
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.992557 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv" event={"ID":"f81cd182-baf0-4779-8a64-b90655bb2275","Type":"ContainerDied","Data":"2266771c0c02a4e84c21012a0aa758b7063f0475f3f4ebaac45ebdc7051daa83"}
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.992687 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv"
Oct 06 11:56:31 crc kubenswrapper[4698]: I1006 11:56:31.992738 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2266771c0c02a4e84c21012a0aa758b7063f0475f3f4ebaac45ebdc7051daa83"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.076002 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"]
Oct 06 11:56:34 crc kubenswrapper[4698]: E1006 11:56:34.077621 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="extract"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.077705 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="extract"
Oct 06 11:56:34 crc kubenswrapper[4698]: E1006 11:56:34.077765 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="pull"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.077811 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="pull"
Oct 06 11:56:34 crc kubenswrapper[4698]: E1006 11:56:34.077869 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="util"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.077921 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="util"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.078110 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81cd182-baf0-4779-8a64-b90655bb2275" containerName="extract"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.078763 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.082404 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.082734 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.082816 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bd66q"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.088677 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"]
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.266780 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhqz\" (UniqueName: \"kubernetes.io/projected/143963ef-7761-472a-b173-7407f5b7befb-kube-api-access-bdhqz\") pod \"nmstate-operator-858ddd8f98-dlzmx\" (UID: \"143963ef-7761-472a-b173-7407f5b7befb\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.369169 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhqz\" (UniqueName: \"kubernetes.io/projected/143963ef-7761-472a-b173-7407f5b7befb-kube-api-access-bdhqz\") pod \"nmstate-operator-858ddd8f98-dlzmx\" (UID: \"143963ef-7761-472a-b173-7407f5b7befb\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.396418 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhqz\" (UniqueName: \"kubernetes.io/projected/143963ef-7761-472a-b173-7407f5b7befb-kube-api-access-bdhqz\") pod \"nmstate-operator-858ddd8f98-dlzmx\" (UID: \"143963ef-7761-472a-b173-7407f5b7befb\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.695312 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"
Oct 06 11:56:34 crc kubenswrapper[4698]: I1006 11:56:34.990257 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx"]
Oct 06 11:56:35 crc kubenswrapper[4698]: I1006 11:56:35.015344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx" event={"ID":"143963ef-7761-472a-b173-7407f5b7befb","Type":"ContainerStarted","Data":"ea5c8422a7ffe1c8a13c3a9e9c3435921b8292aa73fd47d5028ac139372b7e3c"}
Oct 06 11:56:39 crc kubenswrapper[4698]: I1006 11:56:39.056083 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx" event={"ID":"143963ef-7761-472a-b173-7407f5b7befb","Type":"ContainerStarted","Data":"b3b9e5ee4f4e97d6adec51fc47251240cdc0dc3155a574dbb2ff0d041fd58698"}
Oct 06 11:56:39 crc kubenswrapper[4698]: I1006 11:56:39.093776 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-dlzmx" podStartSLOduration=1.71373123 podStartE2EDuration="5.093750708s" podCreationTimestamp="2025-10-06 11:56:34 +0000 UTC" firstStartedPulling="2025-10-06 11:56:35.001641294 +0000 UTC m=+682.414333467" lastFinishedPulling="2025-10-06 11:56:38.381660732 +0000 UTC m=+685.794352945" observedRunningTime="2025-10-06 11:56:39.088174167 +0000 UTC m=+686.500866420" watchObservedRunningTime="2025-10-06 11:56:39.093750708 +0000 UTC m=+686.506442921"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.131674 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.133143 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.137487 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bn5z9"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.146889 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.148185 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.150349 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.158783 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.168045 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.188371 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kgvmq"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.189778 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.263586 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmps\" (UniqueName: \"kubernetes.io/projected/3fd88842-bd7f-4a22-9289-55f917571cbf-kube-api-access-wnmps\") pod \"nmstate-metrics-fdff9cb8d-hvpml\" (UID: \"3fd88842-bd7f-4a22-9289-55f917571cbf\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.263725 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/90215016-8b6e-445d-a43a-d87dafd57bf2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.263782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltbj\" (UniqueName: \"kubernetes.io/projected/90215016-8b6e-445d-a43a-d87dafd57bf2-kube-api-access-jltbj\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.328388 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.329733 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.333462 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.333567 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gj2lc"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.338032 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.342159 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"]
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltbj\" (UniqueName: \"kubernetes.io/projected/90215016-8b6e-445d-a43a-d87dafd57bf2-kube-api-access-jltbj\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365195 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnmps\" (UniqueName: \"kubernetes.io/projected/3fd88842-bd7f-4a22-9289-55f917571cbf-kube-api-access-wnmps\") pod \"nmstate-metrics-fdff9cb8d-hvpml\" (UID: \"3fd88842-bd7f-4a22-9289-55f917571cbf\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365232 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-dbus-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-nmstate-lock\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-ovs-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365356 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495pn\" (UniqueName: \"kubernetes.io/projected/e62b6cf6-ece4-46f0-9aba-887633daf472-kube-api-access-495pn\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.365380 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/90215016-8b6e-445d-a43a-d87dafd57bf2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.376469 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/90215016-8b6e-445d-a43a-d87dafd57bf2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.384299 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltbj\" (UniqueName: \"kubernetes.io/projected/90215016-8b6e-445d-a43a-d87dafd57bf2-kube-api-access-jltbj\") pod \"nmstate-webhook-6cdbc54649-w6q8p\" (UID: \"90215016-8b6e-445d-a43a-d87dafd57bf2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.386221 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnmps\" (UniqueName: \"kubernetes.io/projected/3fd88842-bd7f-4a22-9289-55f917571cbf-kube-api-access-wnmps\") pod \"nmstate-metrics-fdff9cb8d-hvpml\" (UID: \"3fd88842-bd7f-4a22-9289-55f917571cbf\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.454402 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467604 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1028a8b-c391-4df6-978c-83a168615335-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467695 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-dbus-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1028a8b-c391-4df6-978c-83a168615335-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467777 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-nmstate-lock\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467776 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467821 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-ovs-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495pn\" (UniqueName: \"kubernetes.io/projected/e62b6cf6-ece4-46f0-9aba-887633daf472-kube-api-access-495pn\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq"
Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.467915 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s476g\" (UniqueName: \"kubernetes.io/projected/d1028a8b-c391-4df6-978c-83a168615335-kube-api-access-s476g\") pod
\"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.468429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-nmstate-lock\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.468488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-ovs-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.468605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e62b6cf6-ece4-46f0-9aba-887633daf472-dbus-socket\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.489794 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495pn\" (UniqueName: \"kubernetes.io/projected/e62b6cf6-ece4-46f0-9aba-887633daf472-kube-api-access-495pn\") pod \"nmstate-handler-kgvmq\" (UID: \"e62b6cf6-ece4-46f0-9aba-887633daf472\") " pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.512375 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.539889 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bd46cddf-xjb96"] Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.540760 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: W1006 11:56:40.555746 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode62b6cf6_ece4_46f0_9aba_887633daf472.slice/crio-934dca38ba4f3acdd23f21567dfbba2736d5a3681082bf1d0f4a06af37b5e06f WatchSource:0}: Error finding container 934dca38ba4f3acdd23f21567dfbba2736d5a3681082bf1d0f4a06af37b5e06f: Status 404 returned error can't find the container with id 934dca38ba4f3acdd23f21567dfbba2736d5a3681082bf1d0f4a06af37b5e06f Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.557422 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bd46cddf-xjb96"] Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.569835 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-oauth-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.569890 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 
11:56:40.569931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s476g\" (UniqueName: \"kubernetes.io/projected/d1028a8b-c391-4df6-978c-83a168615335-kube-api-access-s476g\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.569950 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-trusted-ca-bundle\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.569983 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4l2\" (UniqueName: \"kubernetes.io/projected/d62f8b04-34f9-49bc-8f90-2dc683686f26-kube-api-access-pl4l2\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.570001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-service-ca\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.570084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1028a8b-c391-4df6-978c-83a168615335-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.570109 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-oauth-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.570127 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.570150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1028a8b-c391-4df6-978c-83a168615335-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.571033 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1028a8b-c391-4df6-978c-83a168615335-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.575450 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1028a8b-c391-4df6-978c-83a168615335-plugin-serving-cert\") pod 
\"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.595741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s476g\" (UniqueName: \"kubernetes.io/projected/d1028a8b-c391-4df6-978c-83a168615335-kube-api-access-s476g\") pod \"nmstate-console-plugin-6b874cbd85-4b488\" (UID: \"d1028a8b-c391-4df6-978c-83a168615335\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.646106 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.670899 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-oauth-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.670948 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.670977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-oauth-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc 
kubenswrapper[4698]: I1006 11:56:40.671031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.671061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-trusted-ca-bundle\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.671094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4l2\" (UniqueName: \"kubernetes.io/projected/d62f8b04-34f9-49bc-8f90-2dc683686f26-kube-api-access-pl4l2\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.671112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-service-ca\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.672563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-service-ca\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.672936 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-oauth-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.674792 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.677567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62f8b04-34f9-49bc-8f90-2dc683686f26-trusted-ca-bundle\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.685180 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-oauth-config\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.685478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d62f8b04-34f9-49bc-8f90-2dc683686f26-console-serving-cert\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.690385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pl4l2\" (UniqueName: \"kubernetes.io/projected/d62f8b04-34f9-49bc-8f90-2dc683686f26-kube-api-access-pl4l2\") pod \"console-5bd46cddf-xjb96\" (UID: \"d62f8b04-34f9-49bc-8f90-2dc683686f26\") " pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.872005 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488"] Oct 06 11:56:40 crc kubenswrapper[4698]: I1006 11:56:40.886467 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:40 crc kubenswrapper[4698]: W1006 11:56:40.905281 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1028a8b_c391_4df6_978c_83a168615335.slice/crio-c0c2ce4e6d5d961c46321370e71494f9b4a54ab6b5994e1f698bb5ad359acef1 WatchSource:0}: Error finding container c0c2ce4e6d5d961c46321370e71494f9b4a54ab6b5994e1f698bb5ad359acef1: Status 404 returned error can't find the container with id c0c2ce4e6d5d961c46321370e71494f9b4a54ab6b5994e1f698bb5ad359acef1 Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.040397 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p"] Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.045766 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml"] Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.091955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" event={"ID":"d1028a8b-c391-4df6-978c-83a168615335","Type":"ContainerStarted","Data":"c0c2ce4e6d5d961c46321370e71494f9b4a54ab6b5994e1f698bb5ad359acef1"} Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.093991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p" event={"ID":"90215016-8b6e-445d-a43a-d87dafd57bf2","Type":"ContainerStarted","Data":"90ddd7511a9934bd7a553ee48d4711c7a35fb25861168455ea68ebf06c814064"} Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.098388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kgvmq" event={"ID":"e62b6cf6-ece4-46f0-9aba-887633daf472","Type":"ContainerStarted","Data":"934dca38ba4f3acdd23f21567dfbba2736d5a3681082bf1d0f4a06af37b5e06f"} Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.103511 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml" event={"ID":"3fd88842-bd7f-4a22-9289-55f917571cbf","Type":"ContainerStarted","Data":"48e9e6cd61becf8f4a2ac3b993b38aaa3531a6c3db91edf7d7294e0954dbe047"} Oct 06 11:56:41 crc kubenswrapper[4698]: I1006 11:56:41.189105 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bd46cddf-xjb96"] Oct 06 11:56:42 crc kubenswrapper[4698]: I1006 11:56:42.114699 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bd46cddf-xjb96" event={"ID":"d62f8b04-34f9-49bc-8f90-2dc683686f26","Type":"ContainerStarted","Data":"45111da59aeeb91c64ba6b92fe10025e088e08e1d7e82fde960626e8a5e0cf8b"} Oct 06 11:56:42 crc kubenswrapper[4698]: I1006 11:56:42.115298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bd46cddf-xjb96" event={"ID":"d62f8b04-34f9-49bc-8f90-2dc683686f26","Type":"ContainerStarted","Data":"4795750e3545bb9b71b18b305c1f450814dbe33086f7f8a910c1d7d0e4b7d34b"} Oct 06 11:56:42 crc kubenswrapper[4698]: I1006 11:56:42.145487 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bd46cddf-xjb96" podStartSLOduration=2.145456754 podStartE2EDuration="2.145456754s" podCreationTimestamp="2025-10-06 11:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:56:42.139814971 +0000 UTC m=+689.552507174" watchObservedRunningTime="2025-10-06 11:56:42.145456754 +0000 UTC m=+689.558148967" Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.137491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml" event={"ID":"3fd88842-bd7f-4a22-9289-55f917571cbf","Type":"ContainerStarted","Data":"357d0a65b2f70dd9b548e5519916656892d4aae175f2ccb6cb99e5b4eebc59dc"} Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.140324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" event={"ID":"d1028a8b-c391-4df6-978c-83a168615335","Type":"ContainerStarted","Data":"acaee505b9c506ffa8b77ebb30e0cf7dfb5947c341e62d6b0e3a8bc16f7d90f9"} Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.141742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p" event={"ID":"90215016-8b6e-445d-a43a-d87dafd57bf2","Type":"ContainerStarted","Data":"9961b6201b9233a544b351b281758f1f7c14c46bea97298cdc89e9f2f4e308ef"} Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.141868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p" Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.145157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kgvmq" event={"ID":"e62b6cf6-ece4-46f0-9aba-887633daf472","Type":"ContainerStarted","Data":"1863640b30a4ecd437c675ece72baf39c4f3795c0b71ce6cd22410d964b519d6"} Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.145472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.167976 4698 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-4b488" podStartSLOduration=2.136536699 podStartE2EDuration="5.167946862s" podCreationTimestamp="2025-10-06 11:56:40 +0000 UTC" firstStartedPulling="2025-10-06 11:56:40.949286597 +0000 UTC m=+688.361978770" lastFinishedPulling="2025-10-06 11:56:43.98069674 +0000 UTC m=+691.393388933" observedRunningTime="2025-10-06 11:56:45.159276862 +0000 UTC m=+692.571969035" watchObservedRunningTime="2025-10-06 11:56:45.167946862 +0000 UTC m=+692.580639035" Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.185688 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kgvmq" podStartSLOduration=1.774887885 podStartE2EDuration="5.18565396s" podCreationTimestamp="2025-10-06 11:56:40 +0000 UTC" firstStartedPulling="2025-10-06 11:56:40.571994347 +0000 UTC m=+687.984686520" lastFinishedPulling="2025-10-06 11:56:43.982760422 +0000 UTC m=+691.395452595" observedRunningTime="2025-10-06 11:56:45.184753897 +0000 UTC m=+692.597446160" watchObservedRunningTime="2025-10-06 11:56:45.18565396 +0000 UTC m=+692.598346183" Oct 06 11:56:45 crc kubenswrapper[4698]: I1006 11:56:45.217778 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p" podStartSLOduration=2.301710684 podStartE2EDuration="5.21775451s" podCreationTimestamp="2025-10-06 11:56:40 +0000 UTC" firstStartedPulling="2025-10-06 11:56:41.064568902 +0000 UTC m=+688.477261075" lastFinishedPulling="2025-10-06 11:56:43.980612718 +0000 UTC m=+691.393304901" observedRunningTime="2025-10-06 11:56:45.214394386 +0000 UTC m=+692.627086589" watchObservedRunningTime="2025-10-06 11:56:45.21775451 +0000 UTC m=+692.630446683" Oct 06 11:56:48 crc kubenswrapper[4698]: I1006 11:56:48.170920 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml" 
event={"ID":"3fd88842-bd7f-4a22-9289-55f917571cbf","Type":"ContainerStarted","Data":"bde855411132e61f4c90dc8ec71db9bfbbd0dc186c7e3c89b860ace36427c88a"} Oct 06 11:56:48 crc kubenswrapper[4698]: I1006 11:56:48.199413 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-hvpml" podStartSLOduration=1.78738632 podStartE2EDuration="8.199389555s" podCreationTimestamp="2025-10-06 11:56:40 +0000 UTC" firstStartedPulling="2025-10-06 11:56:41.060190782 +0000 UTC m=+688.472882955" lastFinishedPulling="2025-10-06 11:56:47.472194017 +0000 UTC m=+694.884886190" observedRunningTime="2025-10-06 11:56:48.198183194 +0000 UTC m=+695.610875437" watchObservedRunningTime="2025-10-06 11:56:48.199389555 +0000 UTC m=+695.612081728" Oct 06 11:56:50 crc kubenswrapper[4698]: I1006 11:56:50.548459 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kgvmq" Oct 06 11:56:50 crc kubenswrapper[4698]: I1006 11:56:50.887874 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:50 crc kubenswrapper[4698]: I1006 11:56:50.887960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:50 crc kubenswrapper[4698]: I1006 11:56:50.893440 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:51 crc kubenswrapper[4698]: I1006 11:56:51.214421 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bd46cddf-xjb96" Oct 06 11:56:51 crc kubenswrapper[4698]: I1006 11:56:51.279567 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"] Oct 06 11:56:55 crc kubenswrapper[4698]: I1006 11:56:55.234906 4698 patch_prober.go:28] interesting 
pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:56:55 crc kubenswrapper[4698]: I1006 11:56:55.235544 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:57:00 crc kubenswrapper[4698]: I1006 11:57:00.479670 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-w6q8p" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.336553 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dtbvf" podUID="dc33924c-840f-497c-ad04-657d6fa573a9" containerName="console" containerID="cri-o://7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c" gracePeriod=15 Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.737451 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dtbvf_dc33924c-840f-497c-ad04-657d6fa573a9/console/0.log" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.738077 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.755957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phznf\" (UniqueName: \"kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756235 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756364 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.756414 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca\") pod \"dc33924c-840f-497c-ad04-657d6fa573a9\" (UID: \"dc33924c-840f-497c-ad04-657d6fa573a9\") " Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.757722 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca" (OuterVolumeSpecName: "service-ca") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.758521 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config" (OuterVolumeSpecName: "console-config") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.759206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.761670 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.765807 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf" (OuterVolumeSpecName: "kube-api-access-phznf") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "kube-api-access-phznf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.779822 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.791323 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dc33924c-840f-497c-ad04-657d6fa573a9" (UID: "dc33924c-840f-497c-ad04-657d6fa573a9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859517 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859547 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859559 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859572 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859582 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc33924c-840f-497c-ad04-657d6fa573a9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859592 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phznf\" (UniqueName: \"kubernetes.io/projected/dc33924c-840f-497c-ad04-657d6fa573a9-kube-api-access-phznf\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:16 crc kubenswrapper[4698]: I1006 11:57:16.859602 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33924c-840f-497c-ad04-657d6fa573a9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:17 crc 
kubenswrapper[4698]: I1006 11:57:17.410695 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dtbvf_dc33924c-840f-497c-ad04-657d6fa573a9/console/0.log" Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.410754 4698 generic.go:334] "Generic (PLEG): container finished" podID="dc33924c-840f-497c-ad04-657d6fa573a9" containerID="7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c" exitCode=2 Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.410792 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dtbvf" event={"ID":"dc33924c-840f-497c-ad04-657d6fa573a9","Type":"ContainerDied","Data":"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c"} Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.410827 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dtbvf" event={"ID":"dc33924c-840f-497c-ad04-657d6fa573a9","Type":"ContainerDied","Data":"60b3bcc07e486bdd7bc488a7d872179b8a711a2e2cf664f1891a75541c1003bc"} Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.410847 4698 scope.go:117] "RemoveContainer" containerID="7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c" Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.410976 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dtbvf" Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.433508 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"] Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.435610 4698 scope.go:117] "RemoveContainer" containerID="7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c" Oct 06 11:57:17 crc kubenswrapper[4698]: E1006 11:57:17.436349 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c\": container with ID starting with 7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c not found: ID does not exist" containerID="7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c" Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.436416 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c"} err="failed to get container status \"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c\": rpc error: code = NotFound desc = could not find container \"7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c\": container with ID starting with 7fa0723ca5f5a69c7b7ec6d18899bd544a2b5d3ca8692a50d33beded2bb5989c not found: ID does not exist" Oct 06 11:57:17 crc kubenswrapper[4698]: I1006 11:57:17.438173 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dtbvf"] Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.374390 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5"] Oct 06 11:57:18 crc kubenswrapper[4698]: E1006 11:57:18.374935 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dc33924c-840f-497c-ad04-657d6fa573a9" containerName="console" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.374949 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc33924c-840f-497c-ad04-657d6fa573a9" containerName="console" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.375097 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc33924c-840f-497c-ad04-657d6fa573a9" containerName="console" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.375964 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.378438 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.390680 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5"] Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.486117 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.486237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6sr\" (UniqueName: \"kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.486384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.587642 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.587717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6sr\" (UniqueName: \"kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.587752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 
11:57:18.588367 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.588493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.610077 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6sr\" (UniqueName: \"kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:18 crc kubenswrapper[4698]: I1006 11:57:18.694387 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:19 crc kubenswrapper[4698]: I1006 11:57:19.171776 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5"] Oct 06 11:57:19 crc kubenswrapper[4698]: I1006 11:57:19.343389 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc33924c-840f-497c-ad04-657d6fa573a9" path="/var/lib/kubelet/pods/dc33924c-840f-497c-ad04-657d6fa573a9/volumes" Oct 06 11:57:19 crc kubenswrapper[4698]: I1006 11:57:19.436864 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerStarted","Data":"8ca24c427df9eefb67746bb56dda1be6d3212418a3a8dfcea66c633384dec772"} Oct 06 11:57:19 crc kubenswrapper[4698]: I1006 11:57:19.436933 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerStarted","Data":"b807a37f0c881126a3fb1ac374320a702ae78b4b8073b22707c619fb52ff41be"} Oct 06 11:57:20 crc kubenswrapper[4698]: I1006 11:57:20.446525 4698 generic.go:334] "Generic (PLEG): container finished" podID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerID="8ca24c427df9eefb67746bb56dda1be6d3212418a3a8dfcea66c633384dec772" exitCode=0 Oct 06 11:57:20 crc kubenswrapper[4698]: I1006 11:57:20.446617 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerDied","Data":"8ca24c427df9eefb67746bb56dda1be6d3212418a3a8dfcea66c633384dec772"} Oct 06 11:57:22 crc kubenswrapper[4698]: I1006 11:57:22.471260 4698 
generic.go:334] "Generic (PLEG): container finished" podID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerID="3e5e8c80dc2a46720167f613a27aed007f70d754a4b3766820de5f043f2ed1ca" exitCode=0 Oct 06 11:57:22 crc kubenswrapper[4698]: I1006 11:57:22.471655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerDied","Data":"3e5e8c80dc2a46720167f613a27aed007f70d754a4b3766820de5f043f2ed1ca"} Oct 06 11:57:23 crc kubenswrapper[4698]: I1006 11:57:23.484246 4698 generic.go:334] "Generic (PLEG): container finished" podID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerID="6efa57a70e5fb91812b12df102e21367f1e951cc4cdad9aac80c8bd7d0dcd7e3" exitCode=0 Oct 06 11:57:23 crc kubenswrapper[4698]: I1006 11:57:23.484355 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerDied","Data":"6efa57a70e5fb91812b12df102e21367f1e951cc4cdad9aac80c8bd7d0dcd7e3"} Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.837773 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.896307 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6sr\" (UniqueName: \"kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr\") pod \"0ece3450-b435-4ef3-ac92-2596540f52d7\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.896490 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle\") pod \"0ece3450-b435-4ef3-ac92-2596540f52d7\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.896517 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util\") pod \"0ece3450-b435-4ef3-ac92-2596540f52d7\" (UID: \"0ece3450-b435-4ef3-ac92-2596540f52d7\") " Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.904820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle" (OuterVolumeSpecName: "bundle") pod "0ece3450-b435-4ef3-ac92-2596540f52d7" (UID: "0ece3450-b435-4ef3-ac92-2596540f52d7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.915338 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr" (OuterVolumeSpecName: "kube-api-access-gn6sr") pod "0ece3450-b435-4ef3-ac92-2596540f52d7" (UID: "0ece3450-b435-4ef3-ac92-2596540f52d7"). InnerVolumeSpecName "kube-api-access-gn6sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.921047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util" (OuterVolumeSpecName: "util") pod "0ece3450-b435-4ef3-ac92-2596540f52d7" (UID: "0ece3450-b435-4ef3-ac92-2596540f52d7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.998614 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-util\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.998668 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ece3450-b435-4ef3-ac92-2596540f52d7-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:24 crc kubenswrapper[4698]: I1006 11:57:24.998680 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6sr\" (UniqueName: \"kubernetes.io/projected/0ece3450-b435-4ef3-ac92-2596540f52d7-kube-api-access-gn6sr\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:25 crc kubenswrapper[4698]: I1006 11:57:25.235091 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:57:25 crc kubenswrapper[4698]: I1006 11:57:25.235615 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Oct 06 11:57:25 crc kubenswrapper[4698]: I1006 11:57:25.507786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" event={"ID":"0ece3450-b435-4ef3-ac92-2596540f52d7","Type":"ContainerDied","Data":"b807a37f0c881126a3fb1ac374320a702ae78b4b8073b22707c619fb52ff41be"} Oct 06 11:57:25 crc kubenswrapper[4698]: I1006 11:57:25.507866 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b807a37f0c881126a3fb1ac374320a702ae78b4b8073b22707c619fb52ff41be" Oct 06 11:57:25 crc kubenswrapper[4698]: I1006 11:57:25.508067 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.861937 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg"] Oct 06 11:57:33 crc kubenswrapper[4698]: E1006 11:57:33.863045 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="pull" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.863059 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="pull" Oct 06 11:57:33 crc kubenswrapper[4698]: E1006 11:57:33.863076 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="extract" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.863082 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="extract" Oct 06 11:57:33 crc kubenswrapper[4698]: E1006 11:57:33.863104 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="util" Oct 06 11:57:33 crc kubenswrapper[4698]: 
I1006 11:57:33.863110 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="util" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.863211 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece3450-b435-4ef3-ac92-2596540f52d7" containerName="extract" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.863668 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.877995 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.878443 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fnv6v" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.878523 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.883508 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.888952 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg"] Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.899914 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.955947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-apiservice-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: 
\"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.956329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-webhook-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:33 crc kubenswrapper[4698]: I1006 11:57:33.956464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfqkn\" (UniqueName: \"kubernetes.io/projected/9c5b34b7-49db-4807-8a96-dec961b07948-kube-api-access-zfqkn\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.058186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-apiservice-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.058247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-webhook-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.058290 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfqkn\" (UniqueName: \"kubernetes.io/projected/9c5b34b7-49db-4807-8a96-dec961b07948-kube-api-access-zfqkn\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.071165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-apiservice-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.071165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c5b34b7-49db-4807-8a96-dec961b07948-webhook-cert\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.086785 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfqkn\" (UniqueName: \"kubernetes.io/projected/9c5b34b7-49db-4807-8a96-dec961b07948-kube-api-access-zfqkn\") pod \"metallb-operator-controller-manager-84678b9ffd-5hzgg\" (UID: \"9c5b34b7-49db-4807-8a96-dec961b07948\") " pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.180824 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.299083 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-64685f8694-khhpz"] Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.299958 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.302762 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rs998" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.303980 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.307976 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.316403 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64685f8694-khhpz"] Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.364462 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fbb\" (UniqueName: \"kubernetes.io/projected/70df7ae7-15a5-42ad-8db5-728081b24cd9-kube-api-access-t9fbb\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.364886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-webhook-cert\") pod 
\"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.364921 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-apiservice-cert\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.466559 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-apiservice-cert\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.466741 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9fbb\" (UniqueName: \"kubernetes.io/projected/70df7ae7-15a5-42ad-8db5-728081b24cd9-kube-api-access-t9fbb\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.466778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-webhook-cert\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.478219 
4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-apiservice-cert\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.478941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70df7ae7-15a5-42ad-8db5-728081b24cd9-webhook-cert\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.497862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9fbb\" (UniqueName: \"kubernetes.io/projected/70df7ae7-15a5-42ad-8db5-728081b24cd9-kube-api-access-t9fbb\") pod \"metallb-operator-webhook-server-64685f8694-khhpz\" (UID: \"70df7ae7-15a5-42ad-8db5-728081b24cd9\") " pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.530211 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg"] Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.575429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" event={"ID":"9c5b34b7-49db-4807-8a96-dec961b07948","Type":"ContainerStarted","Data":"2c557abaaf31ad730416b886bcd05ba87289eba9304e2d467ec5973cd56f2079"} Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.616782 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:34 crc kubenswrapper[4698]: I1006 11:57:34.905539 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64685f8694-khhpz"] Oct 06 11:57:34 crc kubenswrapper[4698]: W1006 11:57:34.916671 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70df7ae7_15a5_42ad_8db5_728081b24cd9.slice/crio-4b9fe6707f98fa4bd0f569ab21470e06c5e84a1aef1f20486ba57930441f94cd WatchSource:0}: Error finding container 4b9fe6707f98fa4bd0f569ab21470e06c5e84a1aef1f20486ba57930441f94cd: Status 404 returned error can't find the container with id 4b9fe6707f98fa4bd0f569ab21470e06c5e84a1aef1f20486ba57930441f94cd Oct 06 11:57:35 crc kubenswrapper[4698]: I1006 11:57:35.585783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" event={"ID":"70df7ae7-15a5-42ad-8db5-728081b24cd9","Type":"ContainerStarted","Data":"4b9fe6707f98fa4bd0f569ab21470e06c5e84a1aef1f20486ba57930441f94cd"} Oct 06 11:57:38 crc kubenswrapper[4698]: I1006 11:57:38.915581 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"] Oct 06 11:57:38 crc kubenswrapper[4698]: I1006 11:57:38.916764 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerName="controller-manager" containerID="cri-o://f9973e5f96cbbf6edd31572849fec410f8dff1df0ca5aba557b583170e194c9e" gracePeriod=30 Oct 06 11:57:38 crc kubenswrapper[4698]: I1006 11:57:38.942962 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"] Oct 06 11:57:38 crc kubenswrapper[4698]: I1006 
11:57:38.943231 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerName="route-controller-manager" containerID="cri-o://213d0f934b0205da4b5993ecf395b9a82a1d323205669605973a40461a462f47" gracePeriod=30 Oct 06 11:57:39 crc kubenswrapper[4698]: I1006 11:57:39.627282 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerID="f9973e5f96cbbf6edd31572849fec410f8dff1df0ca5aba557b583170e194c9e" exitCode=0 Oct 06 11:57:39 crc kubenswrapper[4698]: I1006 11:57:39.627356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" event={"ID":"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1","Type":"ContainerDied","Data":"f9973e5f96cbbf6edd31572849fec410f8dff1df0ca5aba557b583170e194c9e"} Oct 06 11:57:39 crc kubenswrapper[4698]: I1006 11:57:39.634031 4698 generic.go:334] "Generic (PLEG): container finished" podID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerID="213d0f934b0205da4b5993ecf395b9a82a1d323205669605973a40461a462f47" exitCode=0 Oct 06 11:57:39 crc kubenswrapper[4698]: I1006 11:57:39.634092 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" event={"ID":"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2","Type":"ContainerDied","Data":"213d0f934b0205da4b5993ecf395b9a82a1d323205669605973a40461a462f47"} Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.378117 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.393145 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.422976 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6"] Oct 06 11:57:41 crc kubenswrapper[4698]: E1006 11:57:41.423375 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerName="route-controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.423403 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerName="route-controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: E1006 11:57:41.423428 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerName="controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.423439 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerName="controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.423573 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" containerName="controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.423596 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" containerName="route-controller-manager" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.424154 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.455098 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6"] Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.478080 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt8h4\" (UniqueName: \"kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4\") pod \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.478152 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config\") pod \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.478195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert\") pod \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.478314 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca\") pod \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\" (UID: \"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.479626 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config" (OuterVolumeSpecName: "config") pod "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" (UID: 
"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.480601 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" (UID: "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.489443 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" (UID: "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.496660 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4" (OuterVolumeSpecName: "kube-api-access-vt8h4") pod "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" (UID: "c9ed6e65-56c6-41a6-a8f1-18df6ad920c2"). InnerVolumeSpecName "kube-api-access-vt8h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.579751 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles\") pod \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.579926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config\") pod \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.579954 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9lh\" (UniqueName: \"kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh\") pod \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.579996 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert\") pod \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580048 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca\") pod \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\" (UID: \"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1\") " Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzknj\" 
(UniqueName: \"kubernetes.io/projected/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-kube-api-access-rzknj\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580275 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-config\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-client-ca\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-serving-cert\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580609 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt8h4\" (UniqueName: \"kubernetes.io/projected/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-kube-api-access-vt8h4\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580666 4698 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580681 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.580694 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.581075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" (UID: "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.581090 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config" (OuterVolumeSpecName: "config") pod "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" (UID: "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.581139 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" (UID: "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.584711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh" (OuterVolumeSpecName: "kube-api-access-4g9lh") pod "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" (UID: "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1"). InnerVolumeSpecName "kube-api-access-4g9lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.585081 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" (UID: "4e3a5503-7e56-4b44-a7e2-55909a3bbdf1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.647272 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" event={"ID":"c9ed6e65-56c6-41a6-a8f1-18df6ad920c2","Type":"ContainerDied","Data":"a9ab363f9437c6c5190cda410ad14d826b5b62c6d3bd4cff0351405663019d9a"} Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.647341 4698 scope.go:117] "RemoveContainer" containerID="213d0f934b0205da4b5993ecf395b9a82a1d323205669605973a40461a462f47" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.647460 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.650886 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" event={"ID":"70df7ae7-15a5-42ad-8db5-728081b24cd9","Type":"ContainerStarted","Data":"405e6518e0f2df00e6de3646047ada4201833e6010cecb409bc6311724ba6485"} Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.651784 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.663951 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" event={"ID":"9c5b34b7-49db-4807-8a96-dec961b07948","Type":"ContainerStarted","Data":"b26f0b1260260e1af9d84b69a53bdfcf8e6e58423fdee404ccc5e49f7828bb6e"} Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.664038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.665614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" event={"ID":"4e3a5503-7e56-4b44-a7e2-55909a3bbdf1","Type":"ContainerDied","Data":"819d4fe8316bec3e1e751d6b3be93ac162c8a8f1c85a89b8abeee81f096c2d9b"} Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.665727 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7vqvf" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.667867 4698 scope.go:117] "RemoveContainer" containerID="f9973e5f96cbbf6edd31572849fec410f8dff1df0ca5aba557b583170e194c9e" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.684801 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzknj\" (UniqueName: \"kubernetes.io/projected/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-kube-api-access-rzknj\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.684883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-config\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.684905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-client-ca\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.684921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-serving-cert\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " 
pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.685001 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-config\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.685025 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9lh\" (UniqueName: \"kubernetes.io/projected/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-kube-api-access-4g9lh\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.685037 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.685046 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.685055 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.687746 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-config\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.688824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-client-ca\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.694003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-serving-cert\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.730810 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzknj\" (UniqueName: \"kubernetes.io/projected/7a97aab1-ef6e-484b-9c98-fe3ca5af11b8-kube-api-access-rzknj\") pod \"route-controller-manager-85d4456ff9-cr7g6\" (UID: \"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8\") " pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.744037 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" podStartSLOduration=1.578199661 podStartE2EDuration="7.743988667s" podCreationTimestamp="2025-10-06 11:57:34 +0000 UTC" firstStartedPulling="2025-10-06 11:57:34.920274045 +0000 UTC m=+742.332966218" lastFinishedPulling="2025-10-06 11:57:41.086063041 +0000 UTC m=+748.498755224" observedRunningTime="2025-10-06 11:57:41.731297231 +0000 UTC m=+749.143989404" watchObservedRunningTime="2025-10-06 11:57:41.743988667 +0000 UTC m=+749.156680860" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.751123 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.772103 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"] Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.781628 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9bcwc"] Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.815145 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" podStartSLOduration=4.827879545 podStartE2EDuration="8.815125108s" podCreationTimestamp="2025-10-06 11:57:33 +0000 UTC" firstStartedPulling="2025-10-06 11:57:34.54565808 +0000 UTC m=+741.958350253" lastFinishedPulling="2025-10-06 11:57:38.532903643 +0000 UTC m=+745.945595816" observedRunningTime="2025-10-06 11:57:41.814845171 +0000 UTC m=+749.227537344" watchObservedRunningTime="2025-10-06 11:57:41.815125108 +0000 UTC m=+749.227817281" Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.832281 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"] Oct 06 11:57:41 crc kubenswrapper[4698]: I1006 11:57:41.836222 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7vqvf"] Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.063132 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6"] Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.676698 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" 
event={"ID":"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8","Type":"ContainerStarted","Data":"07eadd3deced43fbadfd83147c041fcae0b8cbd01edaecb14bb64e005b642949"} Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.677293 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.677313 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" event={"ID":"7a97aab1-ef6e-484b-9c98-fe3ca5af11b8","Type":"ContainerStarted","Data":"556ca563b7a730616603768e854d18067b75d5e7fd55e245a0586a93f0130b66"} Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.716802 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" Oct 06 11:57:42 crc kubenswrapper[4698]: I1006 11:57:42.745230 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85d4456ff9-cr7g6" podStartSLOduration=3.745203757 podStartE2EDuration="3.745203757s" podCreationTimestamp="2025-10-06 11:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:57:42.71637418 +0000 UTC m=+750.129066353" watchObservedRunningTime="2025-10-06 11:57:42.745203757 +0000 UTC m=+750.157895930" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.341200 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3a5503-7e56-4b44-a7e2-55909a3bbdf1" path="/var/lib/kubelet/pods/4e3a5503-7e56-4b44-a7e2-55909a3bbdf1/volumes" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.343272 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ed6e65-56c6-41a6-a8f1-18df6ad920c2" 
path="/var/lib/kubelet/pods/c9ed6e65-56c6-41a6-a8f1-18df6ad920c2/volumes" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.968311 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-795865b4b4-7tv2r"] Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.969624 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.973615 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.974432 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.974590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.974711 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.975442 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.976003 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 11:57:43 crc kubenswrapper[4698]: I1006 11:57:43.982868 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.045845 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795865b4b4-7tv2r"] Oct 06 11:57:44 crc kubenswrapper[4698]: 
I1006 11:57:44.131883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-config\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.131953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-client-ca\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.132029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df8x7\" (UniqueName: \"kubernetes.io/projected/6edc748b-8262-46fd-a37a-dfcf03792b2b-kube-api-access-df8x7\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.132073 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6edc748b-8262-46fd-a37a-dfcf03792b2b-serving-cert\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.132103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-proxy-ca-bundles\") pod 
\"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.233343 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6edc748b-8262-46fd-a37a-dfcf03792b2b-serving-cert\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.233497 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-proxy-ca-bundles\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.235214 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-proxy-ca-bundles\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.235280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-config\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.235342 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-client-ca\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.235706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-config\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.236120 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6edc748b-8262-46fd-a37a-dfcf03792b2b-client-ca\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.236217 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df8x7\" (UniqueName: \"kubernetes.io/projected/6edc748b-8262-46fd-a37a-dfcf03792b2b-kube-api-access-df8x7\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.253210 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6edc748b-8262-46fd-a37a-dfcf03792b2b-serving-cert\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.268458 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-df8x7\" (UniqueName: \"kubernetes.io/projected/6edc748b-8262-46fd-a37a-dfcf03792b2b-kube-api-access-df8x7\") pod \"controller-manager-795865b4b4-7tv2r\" (UID: \"6edc748b-8262-46fd-a37a-dfcf03792b2b\") " pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.293371 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.616330 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795865b4b4-7tv2r"] Oct 06 11:57:44 crc kubenswrapper[4698]: I1006 11:57:44.706666 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" event={"ID":"6edc748b-8262-46fd-a37a-dfcf03792b2b","Type":"ContainerStarted","Data":"fa4d4d5bb89148479bda43d0c1fa2548f4cf91f916ab9dd0f2840947ae0b9de0"} Oct 06 11:57:45 crc kubenswrapper[4698]: I1006 11:57:45.718564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" event={"ID":"6edc748b-8262-46fd-a37a-dfcf03792b2b","Type":"ContainerStarted","Data":"8d2d1e893108d0844f29a38c3aa585540964cf335623b1c6e95f349e0abc2453"} Oct 06 11:57:45 crc kubenswrapper[4698]: I1006 11:57:45.720407 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:45 crc kubenswrapper[4698]: I1006 11:57:45.726173 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" Oct 06 11:57:45 crc kubenswrapper[4698]: I1006 11:57:45.745169 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-795865b4b4-7tv2r" podStartSLOduration=6.7451450059999996 podStartE2EDuration="6.745145006s" podCreationTimestamp="2025-10-06 11:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:57:45.742054899 +0000 UTC m=+753.154747092" watchObservedRunningTime="2025-10-06 11:57:45.745145006 +0000 UTC m=+753.157837179" Oct 06 11:57:48 crc kubenswrapper[4698]: I1006 11:57:48.795983 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 11:57:54 crc kubenswrapper[4698]: I1006 11:57:54.623092 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-64685f8694-khhpz" Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.235707 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.236325 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.236630 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.238352 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.238762 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba" gracePeriod=600 Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.812707 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba" exitCode=0 Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.812763 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba"} Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.813296 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67"} Oct 06 11:57:55 crc kubenswrapper[4698]: I1006 11:57:55.813327 4698 scope.go:117] "RemoveContainer" containerID="f740d4fc1c100903d5d67499ea0988ec53f44f5a5265fcfef24c778de7e4fd14" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.012426 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 
11:58:14.015184 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.025798 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.130600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.130681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.131153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b69s4\" (UniqueName: \"kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.184517 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-84678b9ffd-5hzgg" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.232955 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b69s4\" (UniqueName: 
\"kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.233057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.233719 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.233940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.234294 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.267304 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b69s4\" (UniqueName: 
\"kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4\") pod \"community-operators-tztjh\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.335902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.866785 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:14 crc kubenswrapper[4698]: W1006 11:58:14.872486 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a0a9d3_9bb3_478a_8ec7_c2ef7ab75806.slice/crio-3ebb4fdd0ad83491d94530ac4ee48f34d487518c7167c015f26c9da1bf510c91 WatchSource:0}: Error finding container 3ebb4fdd0ad83491d94530ac4ee48f34d487518c7167c015f26c9da1bf510c91: Status 404 returned error can't find the container with id 3ebb4fdd0ad83491d94530ac4ee48f34d487518c7167c015f26c9da1bf510c91 Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.968395 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fhhg6"] Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.971469 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.973631 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.973827 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xsg9j" Oct 06 11:58:14 crc kubenswrapper[4698]: I1006 11:58:14.980858 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:14.998386 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z"] Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.016396 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.018873 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerStarted","Data":"3ebb4fdd0ad83491d94530ac4ee48f34d487518c7167c015f26c9da1bf510c91"} Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.026745 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.029419 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z"] Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bl6\" (UniqueName: \"kubernetes.io/projected/afd684b4-4275-4f0f-89d7-3e1624a04237-kube-api-access-67bl6\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " 
pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048509 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-startup\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-sockets\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048757 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4ln\" (UniqueName: \"kubernetes.io/projected/90b467bf-2ac1-461a-83d5-db35ed92d625-kube-api-access-qt4ln\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " 
pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.048951 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-conf\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.049055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-reloader\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.049079 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.123165 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hb6mj"] Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.124487 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.128922 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pvndt" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.128961 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.129133 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.133149 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.133287 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-m6ns6"] Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.134332 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.136300 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.139096 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-m6ns6"] Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151334 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 
11:58:15.151407 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-conf\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-reloader\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151463 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-cert\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bl6\" (UniqueName: \"kubernetes.io/projected/afd684b4-4275-4f0f-89d7-3e1624a04237-kube-api-access-67bl6\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151502 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151518 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jq8\" (UniqueName: \"kubernetes.io/projected/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-kube-api-access-v6jq8\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-startup\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-sockets\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151589 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln77j\" (UniqueName: \"kubernetes.io/projected/6fe4b880-4427-41e6-96d1-50cbc874aa6b-kube-api-access-ln77j\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151618 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metallb-excludel2\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.151649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4ln\" (UniqueName: \"kubernetes.io/projected/90b467bf-2ac1-461a-83d5-db35ed92d625-kube-api-access-qt4ln\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.152314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.152406 4698 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.152456 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert podName:90b467bf-2ac1-461a-83d5-db35ed92d625 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:15.652441123 +0000 UTC m=+783.065133296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert") pod "frr-k8s-webhook-server-64bf5d555-vrr9z" (UID: "90b467bf-2ac1-461a-83d5-db35ed92d625") : secret "frr-k8s-webhook-server-cert" not found Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.152865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-sockets\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.152951 4698 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.153003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-startup\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.153104 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs podName:afd684b4-4275-4f0f-89d7-3e1624a04237 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:15.653059718 +0000 UTC m=+783.065751901 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs") pod "frr-k8s-fhhg6" (UID: "afd684b4-4275-4f0f-89d7-3e1624a04237") : secret "frr-k8s-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.153203 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-reloader\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.155670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afd684b4-4275-4f0f-89d7-3e1624a04237-frr-conf\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.180827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4ln\" (UniqueName: \"kubernetes.io/projected/90b467bf-2ac1-461a-83d5-db35ed92d625-kube-api-access-qt4ln\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.185766 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bl6\" (UniqueName: \"kubernetes.io/projected/afd684b4-4275-4f0f-89d7-3e1624a04237-kube-api-access-67bl6\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln77j\" (UniqueName: 
\"kubernetes.io/projected/6fe4b880-4427-41e6-96d1-50cbc874aa6b-kube-api-access-ln77j\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253566 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metallb-excludel2\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253645 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253666 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-cert\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253732 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: 
\"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.253749 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6jq8\" (UniqueName: \"kubernetes.io/projected/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-kube-api-access-v6jq8\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254069 4698 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254150 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs podName:6fe4b880-4427-41e6-96d1-50cbc874aa6b nodeName:}" failed. No retries permitted until 2025-10-06 11:58:15.754119984 +0000 UTC m=+783.166812157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs") pod "controller-68d546b9d8-m6ns6" (UID: "6fe4b880-4427-41e6-96d1-50cbc874aa6b") : secret "controller-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254279 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254332 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist podName:30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:15.754313569 +0000 UTC m=+783.167005742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist") pod "speaker-hb6mj" (UID: "30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5") : secret "metallb-memberlist" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254498 4698 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.254521 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs podName:30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:15.754513093 +0000 UTC m=+783.167205266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs") pod "speaker-hb6mj" (UID: "30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5") : secret "speaker-certs-secret" not found Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.254987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metallb-excludel2\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.261658 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.268382 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-cert\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 
11:58:15.285061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln77j\" (UniqueName: \"kubernetes.io/projected/6fe4b880-4427-41e6-96d1-50cbc874aa6b-kube-api-access-ln77j\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.288528 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6jq8\" (UniqueName: \"kubernetes.io/projected/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-kube-api-access-v6jq8\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.662711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.662806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.667561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afd684b4-4275-4f0f-89d7-3e1624a04237-metrics-certs\") pod \"frr-k8s-fhhg6\" (UID: \"afd684b4-4275-4f0f-89d7-3e1624a04237\") " pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.667572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/90b467bf-2ac1-461a-83d5-db35ed92d625-cert\") pod \"frr-k8s-webhook-server-64bf5d555-vrr9z\" (UID: \"90b467bf-2ac1-461a-83d5-db35ed92d625\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.763878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.764390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.764431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.764592 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 11:58:15 crc kubenswrapper[4698]: E1006 11:58:15.764675 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist podName:30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:16.764659371 +0000 UTC m=+784.177351544 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist") pod "speaker-hb6mj" (UID: "30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5") : secret "metallb-memberlist" not found Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.770440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-metrics-certs\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.785906 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fe4b880-4427-41e6-96d1-50cbc874aa6b-metrics-certs\") pod \"controller-68d546b9d8-m6ns6\" (UID: \"6fe4b880-4427-41e6-96d1-50cbc874aa6b\") " pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.926422 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:15 crc kubenswrapper[4698]: I1006 11:58:15.954397 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.027059 4698 generic.go:334] "Generic (PLEG): container finished" podID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerID="aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a" exitCode=0 Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.027109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerDied","Data":"aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a"} Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.068333 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.468790 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z"] Oct 06 11:58:16 crc kubenswrapper[4698]: W1006 11:58:16.473617 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90b467bf_2ac1_461a_83d5_db35ed92d625.slice/crio-f6e1e976fd3dc924c89b98b80f148c7e39555a95218ade6db4777876fd64473c WatchSource:0}: Error finding container f6e1e976fd3dc924c89b98b80f148c7e39555a95218ade6db4777876fd64473c: Status 404 returned error can't find the container with id f6e1e976fd3dc924c89b98b80f148c7e39555a95218ade6db4777876fd64473c Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.547419 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-m6ns6"] Oct 06 11:58:16 crc kubenswrapper[4698]: W1006 11:58:16.571238 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe4b880_4427_41e6_96d1_50cbc874aa6b.slice/crio-2f02094fea975e4de06f893ae47c9a76457eb7d1c3ffdd1468579a1c46c71325 WatchSource:0}: Error finding container 2f02094fea975e4de06f893ae47c9a76457eb7d1c3ffdd1468579a1c46c71325: Status 404 returned error can't find the container with id 2f02094fea975e4de06f893ae47c9a76457eb7d1c3ffdd1468579a1c46c71325 Oct 06 11:58:16 crc kubenswrapper[4698]: I1006 11:58:16.779560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:16 crc kubenswrapper[4698]: E1006 11:58:16.779826 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 11:58:16 crc kubenswrapper[4698]: E1006 11:58:16.780308 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist podName:30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5 nodeName:}" failed. No retries permitted until 2025-10-06 11:58:18.78027984 +0000 UTC m=+786.192972013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist") pod "speaker-hb6mj" (UID: "30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5") : secret "metallb-memberlist" not found Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.057569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"9b66c13b5b5bed6d185ec3a032088f81220411419eec8d96aee89764905d9f95"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.059272 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerStarted","Data":"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.060410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" event={"ID":"90b467bf-2ac1-461a-83d5-db35ed92d625","Type":"ContainerStarted","Data":"f6e1e976fd3dc924c89b98b80f148c7e39555a95218ade6db4777876fd64473c"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.062026 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-m6ns6" event={"ID":"6fe4b880-4427-41e6-96d1-50cbc874aa6b","Type":"ContainerStarted","Data":"63e539d0ef658e19a8686804684994494d055bcbd8c62e3f51e586ac08660da8"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.062081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-m6ns6" event={"ID":"6fe4b880-4427-41e6-96d1-50cbc874aa6b","Type":"ContainerStarted","Data":"e1b90bebbfab5ad0e1958781ae921462714135d4c04d0151e1157ea1a007fa76"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.062096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-68d546b9d8-m6ns6" event={"ID":"6fe4b880-4427-41e6-96d1-50cbc874aa6b","Type":"ContainerStarted","Data":"2f02094fea975e4de06f893ae47c9a76457eb7d1c3ffdd1468579a1c46c71325"} Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.062350 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:17 crc kubenswrapper[4698]: I1006 11:58:17.103744 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-m6ns6" podStartSLOduration=2.10371273 podStartE2EDuration="2.10371273s" podCreationTimestamp="2025-10-06 11:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:58:17.098596703 +0000 UTC m=+784.511288886" watchObservedRunningTime="2025-10-06 11:58:17.10371273 +0000 UTC m=+784.516404903" Oct 06 11:58:18 crc kubenswrapper[4698]: I1006 11:58:18.086636 4698 generic.go:334] "Generic (PLEG): container finished" podID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerID="6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec" exitCode=0 Oct 06 11:58:18 crc kubenswrapper[4698]: I1006 11:58:18.086746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerDied","Data":"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec"} Oct 06 11:58:18 crc kubenswrapper[4698]: I1006 11:58:18.813951 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:18 crc kubenswrapper[4698]: I1006 11:58:18.821188 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5-memberlist\") pod \"speaker-hb6mj\" (UID: \"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5\") " pod="metallb-system/speaker-hb6mj" Oct 06 11:58:19 crc kubenswrapper[4698]: I1006 11:58:19.062313 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hb6mj" Oct 06 11:58:19 crc kubenswrapper[4698]: I1006 11:58:19.103755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerStarted","Data":"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70"} Oct 06 11:58:19 crc kubenswrapper[4698]: W1006 11:58:19.112182 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e3aeb4_d5c7_46b7_871e_e6e54cb2bab5.slice/crio-1714ce7196f522f180069c20831bc6901a940e6e8d48ceb354e4f89d3acc0a18 WatchSource:0}: Error finding container 1714ce7196f522f180069c20831bc6901a940e6e8d48ceb354e4f89d3acc0a18: Status 404 returned error can't find the container with id 1714ce7196f522f180069c20831bc6901a940e6e8d48ceb354e4f89d3acc0a18 Oct 06 11:58:19 crc kubenswrapper[4698]: I1006 11:58:19.146142 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tztjh" podStartSLOduration=3.545451514 podStartE2EDuration="6.146117675s" podCreationTimestamp="2025-10-06 11:58:13 +0000 UTC" firstStartedPulling="2025-10-06 11:58:16.029189105 +0000 UTC m=+783.441881268" lastFinishedPulling="2025-10-06 11:58:18.629855256 +0000 UTC m=+786.042547429" observedRunningTime="2025-10-06 11:58:19.142006773 +0000 UTC m=+786.554698946" watchObservedRunningTime="2025-10-06 11:58:19.146117675 +0000 UTC m=+786.558809848" Oct 06 11:58:20 crc kubenswrapper[4698]: I1006 11:58:20.124989 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/speaker-hb6mj" event={"ID":"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5","Type":"ContainerStarted","Data":"8f2c3d8de38fc38c824225786d419157e999ca81e53ec1fef22e337e2809d375"} Oct 06 11:58:20 crc kubenswrapper[4698]: I1006 11:58:20.125062 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hb6mj" event={"ID":"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5","Type":"ContainerStarted","Data":"16f2fde9f24191b29c8f86a5754d0bdc6e8f565f06b5045126645e66e5abce8b"} Oct 06 11:58:20 crc kubenswrapper[4698]: I1006 11:58:20.125074 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hb6mj" event={"ID":"30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5","Type":"ContainerStarted","Data":"1714ce7196f522f180069c20831bc6901a940e6e8d48ceb354e4f89d3acc0a18"} Oct 06 11:58:20 crc kubenswrapper[4698]: I1006 11:58:20.125226 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hb6mj" Oct 06 11:58:20 crc kubenswrapper[4698]: I1006 11:58:20.146389 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hb6mj" podStartSLOduration=5.146367792 podStartE2EDuration="5.146367792s" podCreationTimestamp="2025-10-06 11:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:58:20.145052219 +0000 UTC m=+787.557744402" watchObservedRunningTime="2025-10-06 11:58:20.146367792 +0000 UTC m=+787.559059965" Oct 06 11:58:24 crc kubenswrapper[4698]: I1006 11:58:24.341773 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:24 crc kubenswrapper[4698]: I1006 11:58:24.342472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:24 crc kubenswrapper[4698]: I1006 11:58:24.407640 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.174697 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" event={"ID":"90b467bf-2ac1-461a-83d5-db35ed92d625","Type":"ContainerStarted","Data":"8faccd22c832da9c0536fc451567018e029a508b3e66f352a28c2e2ae219c143"} Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.175349 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.180130 4698 generic.go:334] "Generic (PLEG): container finished" podID="afd684b4-4275-4f0f-89d7-3e1624a04237" containerID="969021bbbd1ee82ab409bbce059e4ab4e35b91d44f6bf8d7bceaeecdf3d3e19e" exitCode=0 Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.180433 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerDied","Data":"969021bbbd1ee82ab409bbce059e4ab4e35b91d44f6bf8d7bceaeecdf3d3e19e"} Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.207091 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" podStartSLOduration=3.47611895 podStartE2EDuration="11.207002411s" podCreationTimestamp="2025-10-06 11:58:14 +0000 UTC" firstStartedPulling="2025-10-06 11:58:16.477973466 +0000 UTC m=+783.890665639" lastFinishedPulling="2025-10-06 11:58:24.208856917 +0000 UTC m=+791.621549100" observedRunningTime="2025-10-06 11:58:25.204718705 +0000 UTC m=+792.617410878" watchObservedRunningTime="2025-10-06 11:58:25.207002411 +0000 UTC m=+792.619694624" Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.243537 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:25 crc kubenswrapper[4698]: I1006 11:58:25.342753 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:26 crc kubenswrapper[4698]: I1006 11:58:26.076172 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-m6ns6" Oct 06 11:58:26 crc kubenswrapper[4698]: I1006 11:58:26.192197 4698 generic.go:334] "Generic (PLEG): container finished" podID="afd684b4-4275-4f0f-89d7-3e1624a04237" containerID="c1cfd11715122d160c19ec1ee12b30f86505b1acdcc12210bfe3752be39b220b" exitCode=0 Oct 06 11:58:26 crc kubenswrapper[4698]: I1006 11:58:26.192418 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerDied","Data":"c1cfd11715122d160c19ec1ee12b30f86505b1acdcc12210bfe3752be39b220b"} Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.203672 4698 generic.go:334] "Generic (PLEG): container finished" podID="afd684b4-4275-4f0f-89d7-3e1624a04237" containerID="3e6ee2f104aa8f5161b7b46bacd8e3008d905ce02d8d6ae4d87cb11970a1cf2d" exitCode=0 Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.203789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerDied","Data":"3e6ee2f104aa8f5161b7b46bacd8e3008d905ce02d8d6ae4d87cb11970a1cf2d"} Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.204356 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tztjh" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="registry-server" containerID="cri-o://ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70" gracePeriod=2 Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.769916 4698 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.863161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities\") pod \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.863405 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content\") pod \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.863496 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b69s4\" (UniqueName: \"kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4\") pod \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\" (UID: \"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806\") " Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.864486 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities" (OuterVolumeSpecName: "utilities") pod "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" (UID: "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.877345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4" (OuterVolumeSpecName: "kube-api-access-b69s4") pod "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" (UID: "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806"). 
InnerVolumeSpecName "kube-api-access-b69s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.965593 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b69s4\" (UniqueName: \"kubernetes.io/projected/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-kube-api-access-b69s4\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:27 crc kubenswrapper[4698]: I1006 11:58:27.965641 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.217797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"1ce05767449abe2ef8f65acaf2d33cdb56869ce7553314d035eaf35505a9f229"} Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.221556 4698 generic.go:334] "Generic (PLEG): container finished" podID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerID="ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70" exitCode=0 Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.221610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerDied","Data":"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70"} Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.221652 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztjh" event={"ID":"c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806","Type":"ContainerDied","Data":"3ebb4fdd0ad83491d94530ac4ee48f34d487518c7167c015f26c9da1bf510c91"} Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.221673 4698 scope.go:117] "RemoveContainer" 
containerID="ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.221802 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tztjh" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.252249 4698 scope.go:117] "RemoveContainer" containerID="6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.277368 4698 scope.go:117] "RemoveContainer" containerID="aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.303392 4698 scope.go:117] "RemoveContainer" containerID="ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70" Oct 06 11:58:28 crc kubenswrapper[4698]: E1006 11:58:28.304227 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70\": container with ID starting with ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70 not found: ID does not exist" containerID="ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.304281 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70"} err="failed to get container status \"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70\": rpc error: code = NotFound desc = could not find container \"ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70\": container with ID starting with ad1472be2fd6c51ce3768fadb31aac01a2e3c5c3283bc2d3bf5f02dcc6e8bd70 not found: ID does not exist" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.304321 4698 scope.go:117] "RemoveContainer" 
containerID="6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec" Oct 06 11:58:28 crc kubenswrapper[4698]: E1006 11:58:28.305067 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec\": container with ID starting with 6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec not found: ID does not exist" containerID="6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.305119 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec"} err="failed to get container status \"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec\": rpc error: code = NotFound desc = could not find container \"6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec\": container with ID starting with 6d47ab8d4c91c4e071656ffa76599de4b18a93983b2a63883a03b155ff6304ec not found: ID does not exist" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.305150 4698 scope.go:117] "RemoveContainer" containerID="aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a" Oct 06 11:58:28 crc kubenswrapper[4698]: E1006 11:58:28.305570 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a\": container with ID starting with aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a not found: ID does not exist" containerID="aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.305596 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a"} err="failed to get container status \"aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a\": rpc error: code = NotFound desc = could not find container \"aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a\": container with ID starting with aa5d484d6f4e6d2688a1249cdabe89f7fb627d636a8777a7d90b5aaca25fad4a not found: ID does not exist" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.842411 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" (UID: "c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:58:28 crc kubenswrapper[4698]: I1006 11:58:28.886924 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:29 crc kubenswrapper[4698]: I1006 11:58:29.067165 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hb6mj" Oct 06 11:58:29 crc kubenswrapper[4698]: I1006 11:58:29.153571 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:29 crc kubenswrapper[4698]: I1006 11:58:29.170643 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tztjh"] Oct 06 11:58:29 crc kubenswrapper[4698]: I1006 11:58:29.232755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" 
event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"f40921b85d4ea49d5f551e0ae997aa6a73e9e840b803c0afe2063f415c481ff6"} Oct 06 11:58:29 crc kubenswrapper[4698]: I1006 11:58:29.338314 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" path="/var/lib/kubelet/pods/c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806/volumes" Oct 06 11:58:30 crc kubenswrapper[4698]: I1006 11:58:30.252890 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"0673a2d1bd167dc7d3c27af12884e0de725e3ca773d24883c95bde2fd0b50565"} Oct 06 11:58:30 crc kubenswrapper[4698]: I1006 11:58:30.254246 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"a3830f3ca0e829d26f198d336c6fe430560e508bfebf2b9fc9dd4bd48e7e7f6c"} Oct 06 11:58:30 crc kubenswrapper[4698]: I1006 11:58:30.254329 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"3050ab98df408c52eb950adb899318850d858738eef614d76da5b8c295a66890"} Oct 06 11:58:31 crc kubenswrapper[4698]: I1006 11:58:31.264900 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhhg6" event={"ID":"afd684b4-4275-4f0f-89d7-3e1624a04237","Type":"ContainerStarted","Data":"5aae28cbe604ea13942f013c8eb382a2d1ff3cead5df1939e2e0e19a3f7c86b7"} Oct 06 11:58:31 crc kubenswrapper[4698]: I1006 11:58:31.299863 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fhhg6" podStartSLOduration=9.18001674 podStartE2EDuration="17.299840193s" podCreationTimestamp="2025-10-06 11:58:14 +0000 UTC" firstStartedPulling="2025-10-06 11:58:16.122387245 +0000 UTC m=+783.535079438" 
lastFinishedPulling="2025-10-06 11:58:24.242210678 +0000 UTC m=+791.654902891" observedRunningTime="2025-10-06 11:58:31.297563626 +0000 UTC m=+798.710255809" watchObservedRunningTime="2025-10-06 11:58:31.299840193 +0000 UTC m=+798.712532376" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.251812 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:32 crc kubenswrapper[4698]: E1006 11:58:32.252252 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="registry-server" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.252284 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="registry-server" Oct 06 11:58:32 crc kubenswrapper[4698]: E1006 11:58:32.252307 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="extract-utilities" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.252318 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="extract-utilities" Oct 06 11:58:32 crc kubenswrapper[4698]: E1006 11:58:32.252337 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="extract-content" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.252349 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="extract-content" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.252582 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a0a9d3-9bb3-478a-8ec7-c2ef7ab75806" containerName="registry-server" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.253317 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.258257 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nv9nb" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.258566 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.261606 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.274855 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.277332 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.455292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqthd\" (UniqueName: \"kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd\") pod \"openstack-operator-index-xjsf7\" (UID: \"008d5563-7e8d-4957-917d-a91ead5bb8c4\") " pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.556841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqthd\" (UniqueName: \"kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd\") pod \"openstack-operator-index-xjsf7\" (UID: \"008d5563-7e8d-4957-917d-a91ead5bb8c4\") " pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.587534 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqthd\" 
(UniqueName: \"kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd\") pod \"openstack-operator-index-xjsf7\" (UID: \"008d5563-7e8d-4957-917d-a91ead5bb8c4\") " pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:32 crc kubenswrapper[4698]: I1006 11:58:32.880288 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:33 crc kubenswrapper[4698]: I1006 11:58:33.388592 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:33 crc kubenswrapper[4698]: W1006 11:58:33.407501 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008d5563_7e8d_4957_917d_a91ead5bb8c4.slice/crio-58d837013b292063686ed0d347c0c41926d48dd50eaa5756ffe8769d2cd18042 WatchSource:0}: Error finding container 58d837013b292063686ed0d347c0c41926d48dd50eaa5756ffe8769d2cd18042: Status 404 returned error can't find the container with id 58d837013b292063686ed0d347c0c41926d48dd50eaa5756ffe8769d2cd18042 Oct 06 11:58:34 crc kubenswrapper[4698]: I1006 11:58:34.309755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xjsf7" event={"ID":"008d5563-7e8d-4957-917d-a91ead5bb8c4","Type":"ContainerStarted","Data":"58d837013b292063686ed0d347c0c41926d48dd50eaa5756ffe8769d2cd18042"} Oct 06 11:58:35 crc kubenswrapper[4698]: I1006 11:58:35.613341 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:35 crc kubenswrapper[4698]: I1006 11:58:35.927232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:35 crc kubenswrapper[4698]: I1006 11:58:35.965587 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-64bf5d555-vrr9z" Oct 06 11:58:35 crc kubenswrapper[4698]: I1006 11:58:35.992936 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.228239 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-spxxt"] Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.229582 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.248306 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-spxxt"] Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.325367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74t6h\" (UniqueName: \"kubernetes.io/projected/9b16e745-42a3-4aaf-ad06-dab67bab9ce7-kube-api-access-74t6h\") pod \"openstack-operator-index-spxxt\" (UID: \"9b16e745-42a3-4aaf-ad06-dab67bab9ce7\") " pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.427850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74t6h\" (UniqueName: \"kubernetes.io/projected/9b16e745-42a3-4aaf-ad06-dab67bab9ce7-kube-api-access-74t6h\") pod \"openstack-operator-index-spxxt\" (UID: \"9b16e745-42a3-4aaf-ad06-dab67bab9ce7\") " pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.467445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74t6h\" (UniqueName: \"kubernetes.io/projected/9b16e745-42a3-4aaf-ad06-dab67bab9ce7-kube-api-access-74t6h\") pod \"openstack-operator-index-spxxt\" (UID: \"9b16e745-42a3-4aaf-ad06-dab67bab9ce7\") " 
pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.565484 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:36 crc kubenswrapper[4698]: I1006 11:58:36.876710 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-spxxt"] Oct 06 11:58:36 crc kubenswrapper[4698]: W1006 11:58:36.888518 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b16e745_42a3_4aaf_ad06_dab67bab9ce7.slice/crio-61d7e1553812138e2e7f13b0afc0c3a1d428d478d226090de00b09f759f21af3 WatchSource:0}: Error finding container 61d7e1553812138e2e7f13b0afc0c3a1d428d478d226090de00b09f759f21af3: Status 404 returned error can't find the container with id 61d7e1553812138e2e7f13b0afc0c3a1d428d478d226090de00b09f759f21af3 Oct 06 11:58:37 crc kubenswrapper[4698]: I1006 11:58:37.340346 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-spxxt" event={"ID":"9b16e745-42a3-4aaf-ad06-dab67bab9ce7","Type":"ContainerStarted","Data":"61d7e1553812138e2e7f13b0afc0c3a1d428d478d226090de00b09f759f21af3"} Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.391236 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xjsf7" podUID="008d5563-7e8d-4957-917d-a91ead5bb8c4" containerName="registry-server" containerID="cri-o://027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018" gracePeriod=2 Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.392198 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xjsf7" event={"ID":"008d5563-7e8d-4957-917d-a91ead5bb8c4","Type":"ContainerStarted","Data":"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018"} Oct 
06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.394200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-spxxt" event={"ID":"9b16e745-42a3-4aaf-ad06-dab67bab9ce7","Type":"ContainerStarted","Data":"8fa6c99237c06220e9c8766652b21eb826721cbdaa910935047f6862d95255e0"} Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.423457 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xjsf7" podStartSLOduration=2.408544547 podStartE2EDuration="9.423427507s" podCreationTimestamp="2025-10-06 11:58:32 +0000 UTC" firstStartedPulling="2025-10-06 11:58:33.410478466 +0000 UTC m=+800.823170659" lastFinishedPulling="2025-10-06 11:58:40.425361436 +0000 UTC m=+807.838053619" observedRunningTime="2025-10-06 11:58:41.420486714 +0000 UTC m=+808.833178907" watchObservedRunningTime="2025-10-06 11:58:41.423427507 +0000 UTC m=+808.836119720" Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.445540 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-spxxt" podStartSLOduration=1.9084239790000002 podStartE2EDuration="5.445507827s" podCreationTimestamp="2025-10-06 11:58:36 +0000 UTC" firstStartedPulling="2025-10-06 11:58:36.891247952 +0000 UTC m=+804.303940135" lastFinishedPulling="2025-10-06 11:58:40.42833178 +0000 UTC m=+807.841023983" observedRunningTime="2025-10-06 11:58:41.439906628 +0000 UTC m=+808.852598801" watchObservedRunningTime="2025-10-06 11:58:41.445507827 +0000 UTC m=+808.858199990" Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.842995 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.924827 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqthd\" (UniqueName: \"kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd\") pod \"008d5563-7e8d-4957-917d-a91ead5bb8c4\" (UID: \"008d5563-7e8d-4957-917d-a91ead5bb8c4\") " Oct 06 11:58:41 crc kubenswrapper[4698]: I1006 11:58:41.931464 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd" (OuterVolumeSpecName: "kube-api-access-qqthd") pod "008d5563-7e8d-4957-917d-a91ead5bb8c4" (UID: "008d5563-7e8d-4957-917d-a91ead5bb8c4"). InnerVolumeSpecName "kube-api-access-qqthd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.027685 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqthd\" (UniqueName: \"kubernetes.io/projected/008d5563-7e8d-4957-917d-a91ead5bb8c4-kube-api-access-qqthd\") on node \"crc\" DevicePath \"\"" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.407648 4698 generic.go:334] "Generic (PLEG): container finished" podID="008d5563-7e8d-4957-917d-a91ead5bb8c4" containerID="027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018" exitCode=0 Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.407733 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xjsf7" event={"ID":"008d5563-7e8d-4957-917d-a91ead5bb8c4","Type":"ContainerDied","Data":"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018"} Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.407828 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xjsf7" 
event={"ID":"008d5563-7e8d-4957-917d-a91ead5bb8c4","Type":"ContainerDied","Data":"58d837013b292063686ed0d347c0c41926d48dd50eaa5756ffe8769d2cd18042"} Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.407864 4698 scope.go:117] "RemoveContainer" containerID="027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.409008 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xjsf7" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.444316 4698 scope.go:117] "RemoveContainer" containerID="027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018" Oct 06 11:58:42 crc kubenswrapper[4698]: E1006 11:58:42.444968 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018\": container with ID starting with 027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018 not found: ID does not exist" containerID="027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.445126 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018"} err="failed to get container status \"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018\": rpc error: code = NotFound desc = could not find container \"027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018\": container with ID starting with 027a65f8f97eb5636046dd477eb9c6842036c98270c589a18301f79edc623018 not found: ID does not exist" Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 11:58:42.466911 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:42 crc kubenswrapper[4698]: I1006 
11:58:42.473842 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xjsf7"] Oct 06 11:58:43 crc kubenswrapper[4698]: I1006 11:58:43.345509 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008d5563-7e8d-4957-917d-a91ead5bb8c4" path="/var/lib/kubelet/pods/008d5563-7e8d-4957-917d-a91ead5bb8c4/volumes" Oct 06 11:58:45 crc kubenswrapper[4698]: I1006 11:58:45.938750 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fhhg6" Oct 06 11:58:46 crc kubenswrapper[4698]: I1006 11:58:46.567168 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:46 crc kubenswrapper[4698]: I1006 11:58:46.567232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:46 crc kubenswrapper[4698]: I1006 11:58:46.615997 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:47 crc kubenswrapper[4698]: I1006 11:58:47.493658 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-spxxt" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.833294 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:58:49 crc kubenswrapper[4698]: E1006 11:58:49.836700 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008d5563-7e8d-4957-917d-a91ead5bb8c4" containerName="registry-server" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.836899 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="008d5563-7e8d-4957-917d-a91ead5bb8c4" containerName="registry-server" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.837310 4698 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="008d5563-7e8d-4957-917d-a91ead5bb8c4" containerName="registry-server" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.839201 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.845571 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.963923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.964413 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xscm\" (UniqueName: \"kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:49 crc kubenswrapper[4698]: I1006 11:58:49.964632 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.066689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content\") pod \"redhat-marketplace-kz6hb\" (UID: 
\"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.066797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xscm\" (UniqueName: \"kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.066854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.067631 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.067732 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.100227 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xscm\" (UniqueName: \"kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm\") pod \"redhat-marketplace-kz6hb\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " 
pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.177727 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:58:50 crc kubenswrapper[4698]: I1006 11:58:50.729765 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:58:50 crc kubenswrapper[4698]: W1006 11:58:50.731147 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a2dc16_2d4a_44a8_8537_cc970d19d2b7.slice/crio-2d8ef8469568a9b737224e47509a981ca130c4e4d8c26ccbb24c998c5cb6d95d WatchSource:0}: Error finding container 2d8ef8469568a9b737224e47509a981ca130c4e4d8c26ccbb24c998c5cb6d95d: Status 404 returned error can't find the container with id 2d8ef8469568a9b737224e47509a981ca130c4e4d8c26ccbb24c998c5cb6d95d Oct 06 11:58:51 crc kubenswrapper[4698]: I1006 11:58:51.487965 4698 generic.go:334] "Generic (PLEG): container finished" podID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerID="e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00" exitCode=0 Oct 06 11:58:51 crc kubenswrapper[4698]: I1006 11:58:51.488067 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerDied","Data":"e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00"} Oct 06 11:58:51 crc kubenswrapper[4698]: I1006 11:58:51.488109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerStarted","Data":"2d8ef8469568a9b737224e47509a981ca130c4e4d8c26ccbb24c998c5cb6d95d"} Oct 06 11:58:53 crc kubenswrapper[4698]: I1006 11:58:53.511084 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerID="09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b" exitCode=0 Oct 06 11:58:53 crc kubenswrapper[4698]: I1006 11:58:53.511160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerDied","Data":"09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b"} Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.522382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerStarted","Data":"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc"} Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.543206 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kz6hb" podStartSLOduration=2.769191742 podStartE2EDuration="5.543181357s" podCreationTimestamp="2025-10-06 11:58:49 +0000 UTC" firstStartedPulling="2025-10-06 11:58:51.490565557 +0000 UTC m=+818.903257770" lastFinishedPulling="2025-10-06 11:58:54.264555172 +0000 UTC m=+821.677247385" observedRunningTime="2025-10-06 11:58:54.54007717 +0000 UTC m=+821.952769373" watchObservedRunningTime="2025-10-06 11:58:54.543181357 +0000 UTC m=+821.955873550" Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.880123 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx"] Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.882056 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.884283 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pzgz6" Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.898596 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx"] Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.961833 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.962265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kk65\" (UniqueName: \"kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:54 crc kubenswrapper[4698]: I1006 11:58:54.962451 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 
11:58:55.063468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.063905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kk65\" (UniqueName: \"kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.064121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.064140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.064687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.092931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kk65\" (UniqueName: \"kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65\") pod \"975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.204159 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:55 crc kubenswrapper[4698]: I1006 11:58:55.701610 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx"] Oct 06 11:58:56 crc kubenswrapper[4698]: I1006 11:58:56.541687 4698 generic.go:334] "Generic (PLEG): container finished" podID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerID="b99dedb8e89e04f0ebd452a3c2d68fee8b50d6a8f363d1962a77b635c6529314" exitCode=0 Oct 06 11:58:56 crc kubenswrapper[4698]: I1006 11:58:56.541778 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" event={"ID":"7c8020aa-27fb-4446-b7b0-63a79eae552a","Type":"ContainerDied","Data":"b99dedb8e89e04f0ebd452a3c2d68fee8b50d6a8f363d1962a77b635c6529314"} Oct 06 11:58:56 crc kubenswrapper[4698]: I1006 11:58:56.541870 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" event={"ID":"7c8020aa-27fb-4446-b7b0-63a79eae552a","Type":"ContainerStarted","Data":"0ae94c3568f50742ed43e189a2fdb5a0df3a37461e188a34c2ddd18e7b5650ec"} Oct 06 11:58:57 crc kubenswrapper[4698]: I1006 11:58:57.552907 4698 generic.go:334] "Generic (PLEG): container finished" podID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerID="ddebacfc24d0416580fbe4c4eb18db1d2966b07de22a90e8fc14c03e3b514b16" exitCode=0 Oct 06 11:58:57 crc kubenswrapper[4698]: I1006 11:58:57.553150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" event={"ID":"7c8020aa-27fb-4446-b7b0-63a79eae552a","Type":"ContainerDied","Data":"ddebacfc24d0416580fbe4c4eb18db1d2966b07de22a90e8fc14c03e3b514b16"} Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.566743 4698 generic.go:334] "Generic (PLEG): container finished" podID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerID="8b5bd72bc93dc3d1b98bb7c77f0004f1a3e6c89aa4ab59ae19ba0cbfd4e6d185" exitCode=0 Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.566861 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" event={"ID":"7c8020aa-27fb-4446-b7b0-63a79eae552a","Type":"ContainerDied","Data":"8b5bd72bc93dc3d1b98bb7c77f0004f1a3e6c89aa4ab59ae19ba0cbfd4e6d185"} Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.820789 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"] Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.823657 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.834961 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"] Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.855640 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.855744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx92v\" (UniqueName: \"kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.855800 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.958090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.958211 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rx92v\" (UniqueName: \"kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.958268 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.958997 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.959136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:58 crc kubenswrapper[4698]: I1006 11:58:58.992095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx92v\" (UniqueName: \"kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v\") pod \"redhat-operators-wp8q9\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") " pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.152423 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wp8q9" Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.682245 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"] Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.890983 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.982565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util\") pod \"7c8020aa-27fb-4446-b7b0-63a79eae552a\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.983092 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle\") pod \"7c8020aa-27fb-4446-b7b0-63a79eae552a\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.983315 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kk65\" (UniqueName: \"kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65\") pod \"7c8020aa-27fb-4446-b7b0-63a79eae552a\" (UID: \"7c8020aa-27fb-4446-b7b0-63a79eae552a\") " Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.983958 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle" (OuterVolumeSpecName: "bundle") pod "7c8020aa-27fb-4446-b7b0-63a79eae552a" (UID: "7c8020aa-27fb-4446-b7b0-63a79eae552a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:58:59 crc kubenswrapper[4698]: I1006 11:58:59.992093 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65" (OuterVolumeSpecName: "kube-api-access-5kk65") pod "7c8020aa-27fb-4446-b7b0-63a79eae552a" (UID: "7c8020aa-27fb-4446-b7b0-63a79eae552a"). InnerVolumeSpecName "kube-api-access-5kk65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.026125 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util" (OuterVolumeSpecName: "util") pod "7c8020aa-27fb-4446-b7b0-63a79eae552a" (UID: "7c8020aa-27fb-4446-b7b0-63a79eae552a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.085242 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kk65\" (UniqueName: \"kubernetes.io/projected/7c8020aa-27fb-4446-b7b0-63a79eae552a-kube-api-access-5kk65\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.085567 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-util\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.085672 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c8020aa-27fb-4446-b7b0-63a79eae552a-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.177963 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.178452 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.248556 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.587084 4698 generic.go:334] "Generic (PLEG): container finished" podID="6047b3cb-da57-4037-820f-68c372a28e04" containerID="47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744" exitCode=0 Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.587224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerDied","Data":"47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744"} Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.587279 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerStarted","Data":"43fb82ae11ecfa2469f3df88de5888b3de25b8d323170085d4438351f50b0c71"} Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.595537 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.595506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx" event={"ID":"7c8020aa-27fb-4446-b7b0-63a79eae552a","Type":"ContainerDied","Data":"0ae94c3568f50742ed43e189a2fdb5a0df3a37461e188a34c2ddd18e7b5650ec"} Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.595747 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae94c3568f50742ed43e189a2fdb5a0df3a37461e188a34c2ddd18e7b5650ec" Oct 06 11:59:00 crc kubenswrapper[4698]: I1006 11:59:00.665276 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:01 crc kubenswrapper[4698]: I1006 11:59:01.612713 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerStarted","Data":"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"} Oct 06 11:59:02 crc kubenswrapper[4698]: I1006 11:59:02.622937 4698 generic.go:334] "Generic (PLEG): container finished" podID="6047b3cb-da57-4037-820f-68c372a28e04" containerID="c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87" exitCode=0 Oct 06 11:59:02 crc kubenswrapper[4698]: I1006 11:59:02.623008 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerDied","Data":"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"} Oct 06 11:59:03 crc kubenswrapper[4698]: I1006 11:59:03.635213 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" 
event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerStarted","Data":"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"} Oct 06 11:59:03 crc kubenswrapper[4698]: I1006 11:59:03.664883 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wp8q9" podStartSLOduration=3.215885842 podStartE2EDuration="5.664853026s" podCreationTimestamp="2025-10-06 11:58:58 +0000 UTC" firstStartedPulling="2025-10-06 11:59:00.589797868 +0000 UTC m=+828.002490081" lastFinishedPulling="2025-10-06 11:59:03.038765092 +0000 UTC m=+830.451457265" observedRunningTime="2025-10-06 11:59:03.659382819 +0000 UTC m=+831.072075062" watchObservedRunningTime="2025-10-06 11:59:03.664853026 +0000 UTC m=+831.077545209" Oct 06 11:59:03 crc kubenswrapper[4698]: I1006 11:59:03.815801 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:59:03 crc kubenswrapper[4698]: I1006 11:59:03.816234 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kz6hb" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="registry-server" containerID="cri-o://9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc" gracePeriod=2 Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.265707 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.362410 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xscm\" (UniqueName: \"kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm\") pod \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.362501 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content\") pod \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.362713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities\") pod \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\" (UID: \"37a2dc16-2d4a-44a8-8537-cc970d19d2b7\") " Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.363970 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities" (OuterVolumeSpecName: "utilities") pod "37a2dc16-2d4a-44a8-8537-cc970d19d2b7" (UID: "37a2dc16-2d4a-44a8-8537-cc970d19d2b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.375433 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm" (OuterVolumeSpecName: "kube-api-access-5xscm") pod "37a2dc16-2d4a-44a8-8537-cc970d19d2b7" (UID: "37a2dc16-2d4a-44a8-8537-cc970d19d2b7"). InnerVolumeSpecName "kube-api-access-5xscm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.392449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37a2dc16-2d4a-44a8-8537-cc970d19d2b7" (UID: "37a2dc16-2d4a-44a8-8537-cc970d19d2b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.465242 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.465317 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xscm\" (UniqueName: \"kubernetes.io/projected/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-kube-api-access-5xscm\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.465343 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a2dc16-2d4a-44a8-8537-cc970d19d2b7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.650869 4698 generic.go:334] "Generic (PLEG): container finished" podID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerID="9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc" exitCode=0 Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.650989 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz6hb" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.651033 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerDied","Data":"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc"} Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.651292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz6hb" event={"ID":"37a2dc16-2d4a-44a8-8537-cc970d19d2b7","Type":"ContainerDied","Data":"2d8ef8469568a9b737224e47509a981ca130c4e4d8c26ccbb24c998c5cb6d95d"} Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.651346 4698 scope.go:117] "RemoveContainer" containerID="9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.680419 4698 scope.go:117] "RemoveContainer" containerID="09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.719002 4698 scope.go:117] "RemoveContainer" containerID="e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.727539 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.734042 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz6hb"] Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.755570 4698 scope.go:117] "RemoveContainer" containerID="9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc" Oct 06 11:59:04 crc kubenswrapper[4698]: E1006 11:59:04.759000 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc\": container with ID starting with 9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc not found: ID does not exist" containerID="9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.759061 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc"} err="failed to get container status \"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc\": rpc error: code = NotFound desc = could not find container \"9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc\": container with ID starting with 9534ae13364f777ed146de224ac473c5fce1e3b03774146b1727e653701a79fc not found: ID does not exist" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.759093 4698 scope.go:117] "RemoveContainer" containerID="09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b" Oct 06 11:59:04 crc kubenswrapper[4698]: E1006 11:59:04.759549 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b\": container with ID starting with 09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b not found: ID does not exist" containerID="09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.759571 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b"} err="failed to get container status \"09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b\": rpc error: code = NotFound desc = could not find container \"09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b\": container with ID 
starting with 09d001176fda4b1236cd4fea2dfb240753f05b1013a8840e4108ed1007952f7b not found: ID does not exist" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.759588 4698 scope.go:117] "RemoveContainer" containerID="e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00" Oct 06 11:59:04 crc kubenswrapper[4698]: E1006 11:59:04.760060 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00\": container with ID starting with e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00 not found: ID does not exist" containerID="e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00" Oct 06 11:59:04 crc kubenswrapper[4698]: I1006 11:59:04.760089 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00"} err="failed to get container status \"e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00\": rpc error: code = NotFound desc = could not find container \"e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00\": container with ID starting with e15cedd837f131333e2d8caf00887009f5bb50b2b617e04f7e0af1e8c3b32e00 not found: ID does not exist" Oct 06 11:59:05 crc kubenswrapper[4698]: I1006 11:59:05.352079 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" path="/var/lib/kubelet/pods/37a2dc16-2d4a-44a8-8537-cc970d19d2b7/volumes" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273467 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592"] Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273760 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="extract" Oct 06 
11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273773 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="extract" Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273785 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="extract-content" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273792 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="extract-content" Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273803 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="registry-server" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273810 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="registry-server" Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273824 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="util" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273831 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="util" Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273843 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="extract-utilities" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273850 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="extract-utilities" Oct 06 11:59:06 crc kubenswrapper[4698]: E1006 11:59:06.273867 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="pull" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 
11:59:06.273873 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="pull" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.273988 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a2dc16-2d4a-44a8-8537-cc970d19d2b7" containerName="registry-server" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.274005 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8020aa-27fb-4446-b7b0-63a79eae552a" containerName="extract" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.274758 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.281475 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-56jj2" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.294076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms2v\" (UniqueName: \"kubernetes.io/projected/5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae-kube-api-access-jms2v\") pod \"openstack-operator-controller-operator-5dfb49f657-vf592\" (UID: \"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae\") " pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.305403 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592"] Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.408368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jms2v\" (UniqueName: \"kubernetes.io/projected/5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae-kube-api-access-jms2v\") pod \"openstack-operator-controller-operator-5dfb49f657-vf592\" (UID: 
\"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae\") " pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.446048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jms2v\" (UniqueName: \"kubernetes.io/projected/5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae-kube-api-access-jms2v\") pod \"openstack-operator-controller-operator-5dfb49f657-vf592\" (UID: \"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae\") " pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" Oct 06 11:59:06 crc kubenswrapper[4698]: I1006 11:59:06.591819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" Oct 06 11:59:07 crc kubenswrapper[4698]: I1006 11:59:07.044184 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592"] Oct 06 11:59:07 crc kubenswrapper[4698]: W1006 11:59:07.046259 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b03b2e1_7b4b_4eca_98ab_84dfeb5c48ae.slice/crio-168c0f4ef5a47cff9d97d3221a84b8ea79c091941f5d6e9588974323f5f8a931 WatchSource:0}: Error finding container 168c0f4ef5a47cff9d97d3221a84b8ea79c091941f5d6e9588974323f5f8a931: Status 404 returned error can't find the container with id 168c0f4ef5a47cff9d97d3221a84b8ea79c091941f5d6e9588974323f5f8a931 Oct 06 11:59:07 crc kubenswrapper[4698]: I1006 11:59:07.690691 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" event={"ID":"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae","Type":"ContainerStarted","Data":"168c0f4ef5a47cff9d97d3221a84b8ea79c091941f5d6e9588974323f5f8a931"} Oct 06 11:59:09 crc kubenswrapper[4698]: I1006 11:59:09.152989 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:09 crc kubenswrapper[4698]: I1006 11:59:09.153419 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:09 crc kubenswrapper[4698]: I1006 11:59:09.203407 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:09 crc kubenswrapper[4698]: I1006 11:59:09.752868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:11 crc kubenswrapper[4698]: I1006 11:59:11.811409 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"]
Oct 06 11:59:11 crc kubenswrapper[4698]: I1006 11:59:11.811972 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wp8q9" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="registry-server" containerID="cri-o://e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e" gracePeriod=2
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.296527 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.422784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities\") pod \"6047b3cb-da57-4037-820f-68c372a28e04\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") "
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.423057 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx92v\" (UniqueName: \"kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v\") pod \"6047b3cb-da57-4037-820f-68c372a28e04\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") "
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.423199 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content\") pod \"6047b3cb-da57-4037-820f-68c372a28e04\" (UID: \"6047b3cb-da57-4037-820f-68c372a28e04\") "
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.426356 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities" (OuterVolumeSpecName: "utilities") pod "6047b3cb-da57-4037-820f-68c372a28e04" (UID: "6047b3cb-da57-4037-820f-68c372a28e04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.436389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v" (OuterVolumeSpecName: "kube-api-access-rx92v") pod "6047b3cb-da57-4037-820f-68c372a28e04" (UID: "6047b3cb-da57-4037-820f-68c372a28e04"). InnerVolumeSpecName "kube-api-access-rx92v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.525551 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx92v\" (UniqueName: \"kubernetes.io/projected/6047b3cb-da57-4037-820f-68c372a28e04-kube-api-access-rx92v\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.525608 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.557862 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6047b3cb-da57-4037-820f-68c372a28e04" (UID: "6047b3cb-da57-4037-820f-68c372a28e04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.627172 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6047b3cb-da57-4037-820f-68c372a28e04-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.733741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" event={"ID":"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae","Type":"ContainerStarted","Data":"d7a3d41b0beaa5720f2c78cecc75427ed8ff669978b5d8f19b147c9cc8fbcc94"}
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.738858 4698 generic.go:334] "Generic (PLEG): container finished" podID="6047b3cb-da57-4037-820f-68c372a28e04" containerID="e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e" exitCode=0
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.738930 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wp8q9"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.738929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerDied","Data":"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"}
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.739381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wp8q9" event={"ID":"6047b3cb-da57-4037-820f-68c372a28e04","Type":"ContainerDied","Data":"43fb82ae11ecfa2469f3df88de5888b3de25b8d323170085d4438351f50b0c71"}
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.739450 4698 scope.go:117] "RemoveContainer" containerID="e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.785469 4698 scope.go:117] "RemoveContainer" containerID="c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.810347 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"]
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.821998 4698 scope.go:117] "RemoveContainer" containerID="47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.836144 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wp8q9"]
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.855718 4698 scope.go:117] "RemoveContainer" containerID="e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"
Oct 06 11:59:12 crc kubenswrapper[4698]: E1006 11:59:12.856501 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e\": container with ID starting with e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e not found: ID does not exist" containerID="e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.856546 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e"} err="failed to get container status \"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e\": rpc error: code = NotFound desc = could not find container \"e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e\": container with ID starting with e551598ece4059cd3a1a1cbd5577a63bcb364dfb41f26b2f00734cb080d00f1e not found: ID does not exist"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.856579 4698 scope.go:117] "RemoveContainer" containerID="c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"
Oct 06 11:59:12 crc kubenswrapper[4698]: E1006 11:59:12.857092 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87\": container with ID starting with c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87 not found: ID does not exist" containerID="c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.857242 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87"} err="failed to get container status \"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87\": rpc error: code = NotFound desc = could not find container \"c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87\": container with ID starting with c3e8167785f37fc849f351dc47ee24326395dc9ccc0674f1c2157f82c9702a87 not found: ID does not exist"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.857288 4698 scope.go:117] "RemoveContainer" containerID="47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744"
Oct 06 11:59:12 crc kubenswrapper[4698]: E1006 11:59:12.858153 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744\": container with ID starting with 47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744 not found: ID does not exist" containerID="47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744"
Oct 06 11:59:12 crc kubenswrapper[4698]: I1006 11:59:12.858294 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744"} err="failed to get container status \"47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744\": rpc error: code = NotFound desc = could not find container \"47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744\": container with ID starting with 47fccab71eb884f3db31ee64c1c1ab1f7eb02d4324958dcb16b7d71e046e9744 not found: ID does not exist"
Oct 06 11:59:13 crc kubenswrapper[4698]: I1006 11:59:13.342106 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6047b3cb-da57-4037-820f-68c372a28e04" path="/var/lib/kubelet/pods/6047b3cb-da57-4037-820f-68c372a28e04/volumes"
Oct 06 11:59:15 crc kubenswrapper[4698]: I1006 11:59:15.773107 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" event={"ID":"5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae","Type":"ContainerStarted","Data":"f9d2097ce4824698e745ea4dc1ed563b581278254afc5465568b235809f92300"}
Oct 06 11:59:15 crc kubenswrapper[4698]: I1006 11:59:15.773487 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592"
Oct 06 11:59:15 crc kubenswrapper[4698]: I1006 11:59:15.814764 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592" podStartSLOduration=1.906204592 podStartE2EDuration="9.814734296s" podCreationTimestamp="2025-10-06 11:59:06 +0000 UTC" firstStartedPulling="2025-10-06 11:59:07.049827488 +0000 UTC m=+834.462519661" lastFinishedPulling="2025-10-06 11:59:14.958357182 +0000 UTC m=+842.371049365" observedRunningTime="2025-10-06 11:59:15.812647213 +0000 UTC m=+843.225339426" watchObservedRunningTime="2025-10-06 11:59:15.814734296 +0000 UTC m=+843.227426489"
Oct 06 11:59:16 crc kubenswrapper[4698]: I1006 11:59:16.596787 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5dfb49f657-vf592"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.862563 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qww7g"]
Oct 06 11:59:20 crc kubenswrapper[4698]: E1006 11:59:20.863895 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="extract-content"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.863927 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="extract-content"
Oct 06 11:59:20 crc kubenswrapper[4698]: E1006 11:59:20.863973 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="registry-server"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.863989 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="registry-server"
Oct 06 11:59:20 crc kubenswrapper[4698]: E1006 11:59:20.864052 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="extract-utilities"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.864072 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="extract-utilities"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.864373 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6047b3cb-da57-4037-820f-68c372a28e04" containerName="registry-server"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.868472 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qww7g"]
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.868670 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.966739 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258rz\" (UniqueName: \"kubernetes.io/projected/d6cdd550-bddb-401e-af65-3bd665e4f5e7-kube-api-access-258rz\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.966920 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-catalog-content\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:20 crc kubenswrapper[4698]: I1006 11:59:20.967000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-utilities\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.068875 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258rz\" (UniqueName: \"kubernetes.io/projected/d6cdd550-bddb-401e-af65-3bd665e4f5e7-kube-api-access-258rz\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.068997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-catalog-content\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.069063 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-utilities\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.069736 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-utilities\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.069883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cdd550-bddb-401e-af65-3bd665e4f5e7-catalog-content\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.097312 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258rz\" (UniqueName: \"kubernetes.io/projected/d6cdd550-bddb-401e-af65-3bd665e4f5e7-kube-api-access-258rz\") pod \"certified-operators-qww7g\" (UID: \"d6cdd550-bddb-401e-af65-3bd665e4f5e7\") " pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.192701 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.505911 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qww7g"]
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.833892 4698 generic.go:334] "Generic (PLEG): container finished" podID="d6cdd550-bddb-401e-af65-3bd665e4f5e7" containerID="b9602ea2400056054778453d540b41640004846ad697e93d636853c48104f350" exitCode=0
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.833995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qww7g" event={"ID":"d6cdd550-bddb-401e-af65-3bd665e4f5e7","Type":"ContainerDied","Data":"b9602ea2400056054778453d540b41640004846ad697e93d636853c48104f350"}
Oct 06 11:59:21 crc kubenswrapper[4698]: I1006 11:59:21.834283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qww7g" event={"ID":"d6cdd550-bddb-401e-af65-3bd665e4f5e7","Type":"ContainerStarted","Data":"c8f5f782046a6ab410e4c8a96d092f0a974e012b7078e352966227c6798f2f85"}
Oct 06 11:59:26 crc kubenswrapper[4698]: I1006 11:59:26.880537 4698 generic.go:334] "Generic (PLEG): container finished" podID="d6cdd550-bddb-401e-af65-3bd665e4f5e7" containerID="a29bd454fb77b08fcad6d1d83607cb48533f41bcef0cd3a108fc793d79161956" exitCode=0
Oct 06 11:59:26 crc kubenswrapper[4698]: I1006 11:59:26.880667 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qww7g" event={"ID":"d6cdd550-bddb-401e-af65-3bd665e4f5e7","Type":"ContainerDied","Data":"a29bd454fb77b08fcad6d1d83607cb48533f41bcef0cd3a108fc793d79161956"}
Oct 06 11:59:27 crc kubenswrapper[4698]: I1006 11:59:27.895967 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qww7g" event={"ID":"d6cdd550-bddb-401e-af65-3bd665e4f5e7","Type":"ContainerStarted","Data":"a8c73910e34fba5652c6d75f13ead71f4edd14112c4d39c41ee7665cad453d27"}
Oct 06 11:59:27 crc kubenswrapper[4698]: I1006 11:59:27.923398 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qww7g" podStartSLOduration=2.436510794 podStartE2EDuration="7.923379096s" podCreationTimestamp="2025-10-06 11:59:20 +0000 UTC" firstStartedPulling="2025-10-06 11:59:21.836516312 +0000 UTC m=+849.249208505" lastFinishedPulling="2025-10-06 11:59:27.323384604 +0000 UTC m=+854.736076807" observedRunningTime="2025-10-06 11:59:27.918786071 +0000 UTC m=+855.331478244" watchObservedRunningTime="2025-10-06 11:59:27.923379096 +0000 UTC m=+855.336071269"
Oct 06 11:59:31 crc kubenswrapper[4698]: I1006 11:59:31.193205 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:31 crc kubenswrapper[4698]: I1006 11:59:31.193726 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:31 crc kubenswrapper[4698]: I1006 11:59:31.257972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:41 crc kubenswrapper[4698]: I1006 11:59:41.242274 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qww7g"
Oct 06 11:59:41 crc kubenswrapper[4698]: I1006 11:59:41.342479 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qww7g"]
Oct 06 11:59:41 crc kubenswrapper[4698]: I1006 11:59:41.377066 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkbch"]
Oct 06 11:59:41 crc kubenswrapper[4698]: I1006 11:59:41.377449 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jkbch" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="registry-server" containerID="cri-o://c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713" gracePeriod=2
Oct 06 11:59:42 crc kubenswrapper[4698]: I1006 11:59:42.978370 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkbch"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.021525 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzskb\" (UniqueName: \"kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb\") pod \"22124369-4b3f-4da0-923e-2963f119496c\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") "
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.021650 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities\") pod \"22124369-4b3f-4da0-923e-2963f119496c\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") "
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.021718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content\") pod \"22124369-4b3f-4da0-923e-2963f119496c\" (UID: \"22124369-4b3f-4da0-923e-2963f119496c\") "
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.022710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities" (OuterVolumeSpecName: "utilities") pod "22124369-4b3f-4da0-923e-2963f119496c" (UID: "22124369-4b3f-4da0-923e-2963f119496c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.027860 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb" (OuterVolumeSpecName: "kube-api-access-qzskb") pod "22124369-4b3f-4da0-923e-2963f119496c" (UID: "22124369-4b3f-4da0-923e-2963f119496c"). InnerVolumeSpecName "kube-api-access-qzskb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.043116 4698 generic.go:334] "Generic (PLEG): container finished" podID="22124369-4b3f-4da0-923e-2963f119496c" containerID="c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713" exitCode=0
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.043170 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerDied","Data":"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"}
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.043204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkbch" event={"ID":"22124369-4b3f-4da0-923e-2963f119496c","Type":"ContainerDied","Data":"2612642ae33fa79c063b96c6dd619a735cb3fc417ae70cf35cacdda1c2e3a24c"}
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.043238 4698 scope.go:117] "RemoveContainer" containerID="c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.043405 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkbch"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.066314 4698 scope.go:117] "RemoveContainer" containerID="8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.086937 4698 scope.go:117] "RemoveContainer" containerID="b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.097488 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22124369-4b3f-4da0-923e-2963f119496c" (UID: "22124369-4b3f-4da0-923e-2963f119496c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.112449 4698 scope.go:117] "RemoveContainer" containerID="c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"
Oct 06 11:59:43 crc kubenswrapper[4698]: E1006 11:59:43.113109 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713\": container with ID starting with c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713 not found: ID does not exist" containerID="c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.113172 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713"} err="failed to get container status \"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713\": rpc error: code = NotFound desc = could not find container \"c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713\": container with ID starting with c2da23cc2da4c54473604d3c46ce7df9c9847e312f49ceba2a5ee169feb38713 not found: ID does not exist"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.113207 4698 scope.go:117] "RemoveContainer" containerID="8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d"
Oct 06 11:59:43 crc kubenswrapper[4698]: E1006 11:59:43.113794 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d\": container with ID starting with 8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d not found: ID does not exist" containerID="8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.113859 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d"} err="failed to get container status \"8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d\": rpc error: code = NotFound desc = could not find container \"8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d\": container with ID starting with 8a252364b3247d5d0081170a803457e6c780676bc0fdc68ed1ddc504bb8b831d not found: ID does not exist"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.113898 4698 scope.go:117] "RemoveContainer" containerID="b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992"
Oct 06 11:59:43 crc kubenswrapper[4698]: E1006 11:59:43.114614 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992\": container with ID starting with b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992 not found: ID does not exist" containerID="b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.114640 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992"} err="failed to get container status \"b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992\": rpc error: code = NotFound desc = could not find container \"b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992\": container with ID starting with b1a46c536f18662c73d984e5d0b42c5af91341e1d5da7c8092e99046ba4a7992 not found: ID does not exist"
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.124148 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzskb\" (UniqueName: \"kubernetes.io/projected/22124369-4b3f-4da0-923e-2963f119496c-kube-api-access-qzskb\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.124173 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.124183 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22124369-4b3f-4da0-923e-2963f119496c-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.372617 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkbch"]
Oct 06 11:59:43 crc kubenswrapper[4698]: I1006 11:59:43.386052 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jkbch"]
Oct 06 11:59:45 crc kubenswrapper[4698]: I1006 11:59:45.344427 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22124369-4b3f-4da0-923e-2963f119496c" path="/var/lib/kubelet/pods/22124369-4b3f-4da0-923e-2963f119496c/volumes"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.763237 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"]
Oct 06 11:59:51 crc kubenswrapper[4698]: E1006 11:59:51.763803 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="registry-server"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.763816 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="registry-server"
Oct 06 11:59:51 crc kubenswrapper[4698]: E1006 11:59:51.763824 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="extract-content"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.763831 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="extract-content"
Oct 06 11:59:51 crc kubenswrapper[4698]: E1006 11:59:51.763842 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="extract-utilities"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.763850 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="extract-utilities"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.763964 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="22124369-4b3f-4da0-923e-2963f119496c" containerName="registry-server"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.764644 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.768885 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n8ghx"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.773313 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.774459 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.777326 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-g2mtn"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.796767 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.807862 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.809173 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.811490 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gkgrm"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.812627 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.825973 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.827357 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.834352 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.841499 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7bkqc"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.861184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8x9\" (UniqueName: \"kubernetes.io/projected/d2432ca3-e684-4c81-95c8-1e57826d09d6-kube-api-access-gj8x9\") pod \"barbican-operator-controller-manager-58c4cd55f4-fgmjd\" (UID: \"d2432ca3-e684-4c81-95c8-1e57826d09d6\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.866433 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.870000 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-689sr"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.876376 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.880497 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-twf5j"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.923799 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.929058 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.937614 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bfvx5"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.963257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6r6\" (UniqueName: \"kubernetes.io/projected/0b715645-3bcb-4443-892b-e30062c78a7f-kube-api-access-lj6r6\") pod \"cinder-operator-controller-manager-7d4d4f8d-wvf75\" (UID: \"0b715645-3bcb-4443-892b-e30062c78a7f\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.963506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tmm2\" (UniqueName: \"kubernetes.io/projected/b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1-kube-api-access-7tmm2\") pod \"designate-operator-controller-manager-75dfd9b554-jncqt\" (UID: \"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.963538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8x9\" (UniqueName: \"kubernetes.io/projected/d2432ca3-e684-4c81-95c8-1e57826d09d6-kube-api-access-gj8x9\") pod \"barbican-operator-controller-manager-58c4cd55f4-fgmjd\" (UID: \"d2432ca3-e684-4c81-95c8-1e57826d09d6\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.963597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2hqc\" (UniqueName: \"kubernetes.io/projected/e2d5b718-b49a-46c0-9f1d-1e536ff62301-kube-api-access-f2hqc\") pod \"glance-operator-controller-manager-5dc44df7d5-tnv74\" (UID: \"e2d5b718-b49a-46c0-9f1d-1e536ff62301\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.963645 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllgm\" (UniqueName: \"kubernetes.io/projected/110b7f13-850f-41a3-aadb-df0f5559ba1d-kube-api-access-hllgm\") pod \"heat-operator-controller-manager-54b4974c45-689sr\" (UID: \"110b7f13-850f-41a3-aadb-df0f5559ba1d\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr"
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.968394 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9"]
Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.980664 4698 kubelet.go:2428] "SyncLoop UPDATE"
source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-689sr"] Oct 06 11:59:51 crc kubenswrapper[4698]: I1006 11:59:51.998485 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.001743 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.008083 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ddx59" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.008294 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.011208 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8x9\" (UniqueName: \"kubernetes.io/projected/d2432ca3-e684-4c81-95c8-1e57826d09d6-kube-api-access-gj8x9\") pod \"barbican-operator-controller-manager-58c4cd55f4-fgmjd\" (UID: \"d2432ca3-e684-4c81-95c8-1e57826d09d6\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.015708 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.035412 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.037953 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.045184 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7mnvp" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.051233 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.053037 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.055505 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2wjgh" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.058081 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.062788 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.064478 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.065960 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6r6\" (UniqueName: \"kubernetes.io/projected/0b715645-3bcb-4443-892b-e30062c78a7f-kube-api-access-lj6r6\") pod \"cinder-operator-controller-manager-7d4d4f8d-wvf75\" (UID: \"0b715645-3bcb-4443-892b-e30062c78a7f\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.065991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tmm2\" (UniqueName: \"kubernetes.io/projected/b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1-kube-api-access-7tmm2\") pod \"designate-operator-controller-manager-75dfd9b554-jncqt\" (UID: \"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.067215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zswq7\" (UniqueName: \"kubernetes.io/projected/92e02173-4289-4b84-b3b2-01b78d0a7205-kube-api-access-zswq7\") pod \"horizon-operator-controller-manager-76d5b87f47-gprg9\" (UID: \"92e02173-4289-4b84-b3b2-01b78d0a7205\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.067345 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2hqc\" (UniqueName: \"kubernetes.io/projected/e2d5b718-b49a-46c0-9f1d-1e536ff62301-kube-api-access-f2hqc\") pod \"glance-operator-controller-manager-5dc44df7d5-tnv74\" (UID: \"e2d5b718-b49a-46c0-9f1d-1e536ff62301\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" Oct 06 11:59:52 crc 
kubenswrapper[4698]: I1006 11:59:52.067382 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7lw\" (UniqueName: \"kubernetes.io/projected/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-kube-api-access-7c7lw\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.067405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.067548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllgm\" (UniqueName: \"kubernetes.io/projected/110b7f13-850f-41a3-aadb-df0f5559ba1d-kube-api-access-hllgm\") pod \"heat-operator-controller-manager-54b4974c45-689sr\" (UID: \"110b7f13-850f-41a3-aadb-df0f5559ba1d\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.077301 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jw92w" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.080157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.088867 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.089504 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.119817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllgm\" (UniqueName: \"kubernetes.io/projected/110b7f13-850f-41a3-aadb-df0f5559ba1d-kube-api-access-hllgm\") pod \"heat-operator-controller-manager-54b4974c45-689sr\" (UID: \"110b7f13-850f-41a3-aadb-df0f5559ba1d\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.126212 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.128146 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.133813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2hqc\" (UniqueName: \"kubernetes.io/projected/e2d5b718-b49a-46c0-9f1d-1e536ff62301-kube-api-access-f2hqc\") pod \"glance-operator-controller-manager-5dc44df7d5-tnv74\" (UID: \"e2d5b718-b49a-46c0-9f1d-1e536ff62301\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.136124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tmm2\" (UniqueName: \"kubernetes.io/projected/b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1-kube-api-access-7tmm2\") pod \"designate-operator-controller-manager-75dfd9b554-jncqt\" (UID: \"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.137077 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zhnfh" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.137207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6r6\" (UniqueName: \"kubernetes.io/projected/0b715645-3bcb-4443-892b-e30062c78a7f-kube-api-access-lj6r6\") pod \"cinder-operator-controller-manager-7d4d4f8d-wvf75\" (UID: \"0b715645-3bcb-4443-892b-e30062c78a7f\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.139573 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.141463 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.144846 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.149641 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s4p94" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.150485 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.163861 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.165891 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.177607 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7hgfc" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.177934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbcd\" (UniqueName: \"kubernetes.io/projected/9543eb0d-82ab-4599-b094-8789588846af-kube-api-access-4pbcd\") pod \"manila-operator-controller-manager-65d89cfd9f-swz27\" (UID: \"9543eb0d-82ab-4599-b094-8789588846af\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.178006 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zswq7\" (UniqueName: \"kubernetes.io/projected/92e02173-4289-4b84-b3b2-01b78d0a7205-kube-api-access-zswq7\") pod \"horizon-operator-controller-manager-76d5b87f47-gprg9\" (UID: \"92e02173-4289-4b84-b3b2-01b78d0a7205\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.192228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7w7\" (UniqueName: \"kubernetes.io/projected/b632a477-335c-4b0e-a83e-3812409b8afa-kube-api-access-mz7w7\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-25w45\" (UID: \"b632a477-335c-4b0e-a83e-3812409b8afa\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.192385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7lw\" (UniqueName: 
\"kubernetes.io/projected/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-kube-api-access-7c7lw\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.192413 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.192571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5gb\" (UniqueName: \"kubernetes.io/projected/d6f6350d-b33d-4ac5-b364-c80145b4b742-kube-api-access-5z5gb\") pod \"ironic-operator-controller-manager-649675d675-4zdhn\" (UID: \"d6f6350d-b33d-4ac5-b364-c80145b4b742\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 11:59:52 crc kubenswrapper[4698]: E1006 11:59:52.192683 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 06 11:59:52 crc kubenswrapper[4698]: E1006 11:59:52.192745 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert podName:c03f5f3c-6e6c-4eba-9a1f-695c23c0d995 nodeName:}" failed. No retries permitted until 2025-10-06 11:59:52.692727712 +0000 UTC m=+880.105419885 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert") pod "infra-operator-controller-manager-658588b8c9-p5rgb" (UID: "c03f5f3c-6e6c-4eba-9a1f-695c23c0d995") : secret "infra-operator-webhook-server-cert" not found Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.199349 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.201648 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.203166 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.204936 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k6x5f" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.205648 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zswq7\" (UniqueName: \"kubernetes.io/projected/92e02173-4289-4b84-b3b2-01b78d0a7205-kube-api-access-zswq7\") pod \"horizon-operator-controller-manager-76d5b87f47-gprg9\" (UID: \"92e02173-4289-4b84-b3b2-01b78d0a7205\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.210049 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.216092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c7lw\" (UniqueName: 
\"kubernetes.io/projected/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-kube-api-access-7c7lw\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.232631 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.257253 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.258808 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.260503 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.262217 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.262700 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d8lxg" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294111 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z5gb\" (UniqueName: \"kubernetes.io/projected/d6f6350d-b33d-4ac5-b364-c80145b4b742-kube-api-access-5z5gb\") pod \"ironic-operator-controller-manager-649675d675-4zdhn\" (UID: \"d6f6350d-b33d-4ac5-b364-c80145b4b742\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294655 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7dq\" (UniqueName: \"kubernetes.io/projected/9d910961-2283-4129-a2e0-6cec10da5779-kube-api-access-7p7dq\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-hg86j\" (UID: \"9d910961-2283-4129-a2e0-6cec10da5779\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294731 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbcd\" (UniqueName: \"kubernetes.io/projected/9543eb0d-82ab-4599-b094-8789588846af-kube-api-access-4pbcd\") pod \"manila-operator-controller-manager-65d89cfd9f-swz27\" (UID: \"9543eb0d-82ab-4599-b094-8789588846af\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294792 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7w7\" (UniqueName: \"kubernetes.io/projected/b632a477-335c-4b0e-a83e-3812409b8afa-kube-api-access-mz7w7\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-25w45\" (UID: \"b632a477-335c-4b0e-a83e-3812409b8afa\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68wx\" (UniqueName: \"kubernetes.io/projected/38d45acb-51da-4535-a6a8-a317360f96fd-kube-api-access-c68wx\") pod \"nova-operator-controller-manager-7c7fc454ff-rgxcq\" (UID: \"38d45acb-51da-4535-a6a8-a317360f96fd\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.294895 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4fz\" (UniqueName: \"kubernetes.io/projected/fa0c0f93-841b-4e62-becb-32dcf40ae439-kube-api-access-lj4fz\") pod \"neutron-operator-controller-manager-8d984cc4d-w5cv6\" (UID: \"fa0c0f93-841b-4e62-becb-32dcf40ae439\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.295008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v62k5\" (UniqueName: \"kubernetes.io/projected/56e863a6-f963-4d2f-9de6-7805ff14e90a-kube-api-access-v62k5\") pod \"octavia-operator-controller-manager-7468f855d8-j9bnp\" (UID: \"56e863a6-f963-4d2f-9de6-7805ff14e90a\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.295904 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.302174 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.323588 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.326734 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.330881 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7w7\" (UniqueName: \"kubernetes.io/projected/b632a477-335c-4b0e-a83e-3812409b8afa-kube-api-access-mz7w7\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-25w45\" (UID: \"b632a477-335c-4b0e-a83e-3812409b8afa\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.331168 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pww4l" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.337591 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5gb\" (UniqueName: \"kubernetes.io/projected/d6f6350d-b33d-4ac5-b364-c80145b4b742-kube-api-access-5z5gb\") pod \"ironic-operator-controller-manager-649675d675-4zdhn\" (UID: \"d6f6350d-b33d-4ac5-b364-c80145b4b742\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.347144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbcd\" (UniqueName: 
\"kubernetes.io/projected/9543eb0d-82ab-4599-b094-8789588846af-kube-api-access-4pbcd\") pod \"manila-operator-controller-manager-65d89cfd9f-swz27\" (UID: \"9543eb0d-82ab-4599-b094-8789588846af\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.364841 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"] Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.368814 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.377872 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dshn5" Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.378373 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.380378 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396121 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396174 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c68wx\" (UniqueName: \"kubernetes.io/projected/38d45acb-51da-4535-a6a8-a317360f96fd-kube-api-access-c68wx\") pod \"nova-operator-controller-manager-7c7fc454ff-rgxcq\" (UID: \"38d45acb-51da-4535-a6a8-a317360f96fd\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4fz\" (UniqueName: \"kubernetes.io/projected/fa0c0f93-841b-4e62-becb-32dcf40ae439-kube-api-access-lj4fz\") pod \"neutron-operator-controller-manager-8d984cc4d-w5cv6\" (UID: \"fa0c0f93-841b-4e62-becb-32dcf40ae439\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396286 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v62k5\" (UniqueName: \"kubernetes.io/projected/56e863a6-f963-4d2f-9de6-7805ff14e90a-kube-api-access-v62k5\") pod \"octavia-operator-controller-manager-7468f855d8-j9bnp\" (UID: \"56e863a6-f963-4d2f-9de6-7805ff14e90a\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7dq\" (UniqueName: \"kubernetes.io/projected/9d910961-2283-4129-a2e0-6cec10da5779-kube-api-access-7p7dq\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-hg86j\" (UID: \"9d910961-2283-4129-a2e0-6cec10da5779\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.396350 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfg9l\" (UniqueName: \"kubernetes.io/projected/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-kube-api-access-kfg9l\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.399057 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.405779 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.427154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.445053 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v62k5\" (UniqueName: \"kubernetes.io/projected/56e863a6-f963-4d2f-9de6-7805ff14e90a-kube-api-access-v62k5\") pod \"octavia-operator-controller-manager-7468f855d8-j9bnp\" (UID: \"56e863a6-f963-4d2f-9de6-7805ff14e90a\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.453593 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4fz\" (UniqueName: \"kubernetes.io/projected/fa0c0f93-841b-4e62-becb-32dcf40ae439-kube-api-access-lj4fz\") pod \"neutron-operator-controller-manager-8d984cc4d-w5cv6\" (UID: \"fa0c0f93-841b-4e62-becb-32dcf40ae439\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.459272 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.463260 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.466954 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4f9lx"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.468586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68wx\" (UniqueName: \"kubernetes.io/projected/38d45acb-51da-4535-a6a8-a317360f96fd-kube-api-access-c68wx\") pod \"nova-operator-controller-manager-7c7fc454ff-rgxcq\" (UID: \"38d45acb-51da-4535-a6a8-a317360f96fd\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.488578 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.488767 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.497924 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.507748 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfdq\" (UniqueName: \"kubernetes.io/projected/6e97841f-b15e-4834-a445-d2a632d7021a-kube-api-access-wvfdq\") pod \"placement-operator-controller-manager-54689d9f88-vwcqq\" (UID: \"6e97841f-b15e-4834-a445-d2a632d7021a\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.507843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfg9l\" (UniqueName: \"kubernetes.io/projected/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-kube-api-access-kfg9l\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.507884 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89nn9\" (UniqueName: \"kubernetes.io/projected/437c5088-93d6-4331-8671-e4e537e553a7-kube-api-access-89nn9\") pod \"ovn-operator-controller-manager-6d8b6f9b9-c8sxb\" (UID: \"437c5088-93d6-4331-8671-e4e537e553a7\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.507981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: E1006 11:59:52.512707 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 11:59:52 crc kubenswrapper[4698]: E1006 11:59:52.512789 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert podName:744f45cb-8563-4bf2-90f1-59f2caa1e4f4 nodeName:}" failed. No retries permitted until 2025-10-06 11:59:53.012765496 +0000 UTC m=+880.425457669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" (UID: "744f45cb-8563-4bf2-90f1-59f2caa1e4f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.515976 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.516734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7dq\" (UniqueName: \"kubernetes.io/projected/9d910961-2283-4129-a2e0-6cec10da5779-kube-api-access-7p7dq\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-hg86j\" (UID: \"9d910961-2283-4129-a2e0-6cec10da5779\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.517292 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-w8b4h"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.520191 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.557540 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.561027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfg9l\" (UniqueName: \"kubernetes.io/projected/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-kube-api-access-kfg9l\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.580634 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.591628 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.616522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvfdq\" (UniqueName: \"kubernetes.io/projected/6e97841f-b15e-4834-a445-d2a632d7021a-kube-api-access-wvfdq\") pod \"placement-operator-controller-manager-54689d9f88-vwcqq\" (UID: \"6e97841f-b15e-4834-a445-d2a632d7021a\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.616570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89nn9\" (UniqueName: \"kubernetes.io/projected/437c5088-93d6-4331-8671-e4e537e553a7-kube-api-access-89nn9\") pod \"ovn-operator-controller-manager-6d8b6f9b9-c8sxb\" (UID: \"437c5088-93d6-4331-8671-e4e537e553a7\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.616598 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8zc\" (UniqueName: \"kubernetes.io/projected/782cf4ae-9b34-46e9-9bfc-c7da6118c2dc-kube-api-access-vx8zc\") pod \"swift-operator-controller-manager-6859f9b676-496mk\" (UID: \"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.616634 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbnz\" (UniqueName: \"kubernetes.io/projected/6cd7d60a-943c-42e8-9b96-74e76f1338f6-kube-api-access-4hbnz\") pod \"telemetry-operator-controller-manager-5d4d74dd89-r6ntv\" (UID: \"6cd7d60a-943c-42e8-9b96-74e76f1338f6\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.634909 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.640279 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvfdq\" (UniqueName: \"kubernetes.io/projected/6e97841f-b15e-4834-a445-d2a632d7021a-kube-api-access-wvfdq\") pod \"placement-operator-controller-manager-54689d9f88-vwcqq\" (UID: \"6e97841f-b15e-4834-a445-d2a632d7021a\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.642262 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.642409 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.646300 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89nn9\" (UniqueName: \"kubernetes.io/projected/437c5088-93d6-4331-8671-e4e537e553a7-kube-api-access-89nn9\") pod \"ovn-operator-controller-manager-6d8b6f9b9-c8sxb\" (UID: \"437c5088-93d6-4331-8671-e4e537e553a7\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.648694 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.650088 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qc7s5"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.661286 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.667578 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.686619 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.689599 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.704740 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-67gbt"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.715233 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.716943 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.719304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8zc\" (UniqueName: \"kubernetes.io/projected/782cf4ae-9b34-46e9-9bfc-c7da6118c2dc-kube-api-access-vx8zc\") pod \"swift-operator-controller-manager-6859f9b676-496mk\" (UID: \"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.719374 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hbnz\" (UniqueName: \"kubernetes.io/projected/6cd7d60a-943c-42e8-9b96-74e76f1338f6-kube-api-access-4hbnz\") pod \"telemetry-operator-controller-manager-5d4d74dd89-r6ntv\" (UID: \"6cd7d60a-943c-42e8-9b96-74e76f1338f6\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.719454 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64r9\" (UniqueName: \"kubernetes.io/projected/6fdb9f18-6759-435a-bae6-90271f8da5b0-kube-api-access-t64r9\") pod \"test-operator-controller-manager-5cd5cb47d7-pnks4\" (UID: \"6fdb9f18-6759-435a-bae6-90271f8da5b0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.719510 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.719773 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.720992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d84lx"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.728368 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c03f5f3c-6e6c-4eba-9a1f-695c23c0d995-cert\") pod \"infra-operator-controller-manager-658588b8c9-p5rgb\" (UID: \"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.733828 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.742440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8zc\" (UniqueName: \"kubernetes.io/projected/782cf4ae-9b34-46e9-9bfc-c7da6118c2dc-kube-api-access-vx8zc\") pod \"swift-operator-controller-manager-6859f9b676-496mk\" (UID: \"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.743284 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hbnz\" (UniqueName: \"kubernetes.io/projected/6cd7d60a-943c-42e8-9b96-74e76f1338f6-kube-api-access-4hbnz\") pod \"telemetry-operator-controller-manager-5d4d74dd89-r6ntv\" (UID: \"6cd7d60a-943c-42e8-9b96-74e76f1338f6\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.748644 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.751087 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.763618 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.766089 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.768542 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-72gqc"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.771519 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.822335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-cert\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.822423 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhpt\" (UniqueName: \"kubernetes.io/projected/33858802-bf6b-42d2-bdc6-8ec2202dd1fe-kube-api-access-nzhpt\") pod \"watcher-operator-controller-manager-99f6c4584-gxz2f\" (UID: \"33858802-bf6b-42d2-bdc6-8ec2202dd1fe\") " pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.822449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xggq\" (UniqueName: \"kubernetes.io/projected/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-kube-api-access-9xggq\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.822476 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64r9\" (UniqueName: \"kubernetes.io/projected/6fdb9f18-6759-435a-bae6-90271f8da5b0-kube-api-access-t64r9\") pod \"test-operator-controller-manager-5cd5cb47d7-pnks4\" (UID: \"6fdb9f18-6759-435a-bae6-90271f8da5b0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.847441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64r9\" (UniqueName: \"kubernetes.io/projected/6fdb9f18-6759-435a-bae6-90271f8da5b0-kube-api-access-t64r9\") pod \"test-operator-controller-manager-5cd5cb47d7-pnks4\" (UID: \"6fdb9f18-6759-435a-bae6-90271f8da5b0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.860059 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd"]
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.869109 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.919777 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.925397 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-cert\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.925467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhpt\" (UniqueName: \"kubernetes.io/projected/33858802-bf6b-42d2-bdc6-8ec2202dd1fe-kube-api-access-nzhpt\") pod \"watcher-operator-controller-manager-99f6c4584-gxz2f\" (UID: \"33858802-bf6b-42d2-bdc6-8ec2202dd1fe\") " pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.925493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xggq\" (UniqueName: \"kubernetes.io/projected/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-kube-api-access-9xggq\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.925545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l627z\" (UniqueName: \"kubernetes.io/projected/572054de-889b-43ac-abb2-8bca55810d18-kube-api-access-l627z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-85kfz\" (UID: \"572054de-889b-43ac-abb2-8bca55810d18\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.936474 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-cert\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.946293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xggq\" (UniqueName: \"kubernetes.io/projected/7c1cbc98-12c2-409b-b673-0f3df8edd0fc-kube-api-access-9xggq\") pod \"openstack-operator-controller-manager-767bcbdf69-tr7dh\" (UID: \"7c1cbc98-12c2-409b-b673-0f3df8edd0fc\") " pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.949754 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhpt\" (UniqueName: \"kubernetes.io/projected/33858802-bf6b-42d2-bdc6-8ec2202dd1fe-kube-api-access-nzhpt\") pod \"watcher-operator-controller-manager-99f6c4584-gxz2f\" (UID: \"33858802-bf6b-42d2-bdc6-8ec2202dd1fe\") " pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.958569 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"
Oct 06 11:59:52 crc kubenswrapper[4698]: I1006 11:59:52.999537 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.007391 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.014622 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-689sr"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.026838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.026912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l627z\" (UniqueName: \"kubernetes.io/projected/572054de-889b-43ac-abb2-8bca55810d18-kube-api-access-l627z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-85kfz\" (UID: \"572054de-889b-43ac-abb2-8bca55810d18\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"
Oct 06 11:59:53 crc kubenswrapper[4698]: E1006 11:59:53.030789 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 11:59:53 crc kubenswrapper[4698]: E1006 11:59:53.030876 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert podName:744f45cb-8563-4bf2-90f1-59f2caa1e4f4 nodeName:}" failed. No retries permitted until 2025-10-06 11:59:54.03085578 +0000 UTC m=+881.443547953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" (UID: "744f45cb-8563-4bf2-90f1-59f2caa1e4f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.034321 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.051572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l627z\" (UniqueName: \"kubernetes.io/projected/572054de-889b-43ac-abb2-8bca55810d18-kube-api-access-l627z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-85kfz\" (UID: \"572054de-889b-43ac-abb2-8bca55810d18\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.057894 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.070362 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.097004 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.182087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" event={"ID":"110b7f13-850f-41a3-aadb-df0f5559ba1d","Type":"ContainerStarted","Data":"d5e03668583a285a9a7a7e02d71a9ee8b301d2b3f618447e9061c52101941ce7"}
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.183394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" event={"ID":"d2432ca3-e684-4c81-95c8-1e57826d09d6","Type":"ContainerStarted","Data":"4eb810d5f8cbffa339a7b4e0ce41b720c023ee8d4b0e1b071f2270fe84fc88a5"}
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.190734 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" event={"ID":"e2d5b718-b49a-46c0-9f1d-1e536ff62301","Type":"ContainerStarted","Data":"8434501260fe2eae041b15cba3e18a72cc1f9b0fb5de93cd3c6bdb870ffa4891"}
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.195049 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.270405 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.506952 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.508284 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.764700 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.794117 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt"]
Oct 06 11:59:53 crc kubenswrapper[4698]: I1006 11:59:53.798483 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75"]
Oct 06 11:59:53 crc kubenswrapper[4698]: W1006 11:59:53.799636 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b715645_3bcb_4443_892b_e30062c78a7f.slice/crio-0b5aff25f05fd7cdcafe16146aef22b5d0b397470652a3c5955d0e1b455ed272 WatchSource:0}: Error finding container 0b5aff25f05fd7cdcafe16146aef22b5d0b397470652a3c5955d0e1b455ed272: Status 404 returned error can't find the container with id 0b5aff25f05fd7cdcafe16146aef22b5d0b397470652a3c5955d0e1b455ed272
Oct 06 11:59:53 crc kubenswrapper[4698]: W1006 11:59:53.801415 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a6e10a_c7c5_45a6_96fe_4fb3e60ffde1.slice/crio-d60bbbb7195f33b85878f3edcb1c6509b6ab388196afe00c8b63bc219239c2af WatchSource:0}: Error finding container d60bbbb7195f33b85878f3edcb1c6509b6ab388196afe00c8b63bc219239c2af: Status 404 returned error can't find the container with id d60bbbb7195f33b85878f3edcb1c6509b6ab388196afe00c8b63bc219239c2af
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.042165 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq"]
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.069471 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.077432 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp"]
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.101237 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/744f45cb-8563-4bf2-90f1-59f2caa1e4f4-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp\" (UID: \"744f45cb-8563-4bf2-90f1-59f2caa1e4f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.125591 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-496mk"]
Oct 06 11:59:54 crc kubenswrapper[4698]: W1006 11:59:54.133062 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod437c5088_93d6_4331_8671_e4e537e553a7.slice/crio-4759ce50adf74d7357982c2d6384746e523255d74c0dc4879ca7eefd9fbfb3fa WatchSource:0}: Error finding container 4759ce50adf74d7357982c2d6384746e523255d74c0dc4879ca7eefd9fbfb3fa: Status 404 returned error can't find the container with id 4759ce50adf74d7357982c2d6384746e523255d74c0dc4879ca7eefd9fbfb3fa
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.133097 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb"]
Oct 06 11:59:54 crc kubenswrapper[4698]: W1006 11:59:54.150225 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38d45acb_51da_4535_a6a8_a317360f96fd.slice/crio-a11e12c7f97114c32cb285f404abd3ef4d861b748a72f604a157846adb0169ed WatchSource:0}: Error finding container a11e12c7f97114c32cb285f404abd3ef4d861b748a72f604a157846adb0169ed: Status 404 returned error can't find the container with id a11e12c7f97114c32cb285f404abd3ef4d861b748a72f604a157846adb0169ed
Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.154609 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27"]
Oct 06 11:59:54 crc kubenswrapper[4698]: W1006 11:59:54.158419 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9543eb0d_82ab_4599_b094_8789588846af.slice/crio-390cd771c1449c1d6939f8a2b1d928e33252322a81e43382e63757c93f54c498 WatchSource:0}: Error finding container 390cd771c1449c1d6939f8a2b1d928e33252322a81e43382e63757c93f54c498: Status 404 returned error can't find the container with id 390cd771c1449c1d6939f8a2b1d928e33252322a81e43382e63757c93f54c498
Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.163953 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0}
{} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pbcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-65d89cfd9f-swz27_openstack-operators(9543eb0d-82ab-4599-b094-8789588846af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.173633 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.193616 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.221750 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.251111 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" event={"ID":"9543eb0d-82ab-4599-b094-8789588846af","Type":"ContainerStarted","Data":"390cd771c1449c1d6939f8a2b1d928e33252322a81e43382e63757c93f54c498"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.263093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" event={"ID":"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc","Type":"ContainerStarted","Data":"62abeae967f0ea123d67f8bac79567abdfc6014b2cf3a538ed7d46bafcb9b6c1"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.276333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" event={"ID":"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1","Type":"ContainerStarted","Data":"d60bbbb7195f33b85878f3edcb1c6509b6ab388196afe00c8b63bc219239c2af"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.283888 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.286557 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" event={"ID":"6e97841f-b15e-4834-a445-d2a632d7021a","Type":"ContainerStarted","Data":"e8e0ca3915237f08723e29f2dedee3152c25b564190a177987591770005c7beb"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.310854 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.313291 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.318728 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.324575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" event={"ID":"b632a477-335c-4b0e-a83e-3812409b8afa","Type":"ContainerStarted","Data":"e5b0834c3d02d445d325faae1297fa5a69c06fb9adb47750f6f87809f567bf1b"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.331803 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" event={"ID":"92e02173-4289-4b84-b3b2-01b78d0a7205","Type":"ContainerStarted","Data":"8489cb451c9bf019ea9c64daea1bc2802f484133c69eaf6e1f67f076879a85a8"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.334038 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv"] Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.364409 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" event={"ID":"56e863a6-f963-4d2f-9de6-7805ff14e90a","Type":"ContainerStarted","Data":"87ddc770e84fdef4ccb3f18035ce3c81a32aeaaa74eb72bc947a33b726663ade"} Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.367173 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.75:5001/openstack-k8s-operators/watcher-operator:45a54c30f8614d542491b9f49d81622c14cc615b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nzhpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-99f6c4584-gxz2f_openstack-operators(33858802-bf6b-42d2-bdc6-8ec2202dd1fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.368362 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" event={"ID":"9d910961-2283-4129-a2e0-6cec10da5779","Type":"ContainerStarted","Data":"54b8db998bc64c07782473dffd2530a9ecbaf8b43e1f7e9cc958ba179b5af320"} Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.369033 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l627z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-85kfz_openstack-operators(572054de-889b-43ac-abb2-8bca55810d18): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 
11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.370502 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" podUID="572054de-889b-43ac-abb2-8bca55810d18" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.371828 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" event={"ID":"d6f6350d-b33d-4ac5-b364-c80145b4b742","Type":"ContainerStarted","Data":"b40b250adb55695ebbbcc22ce22ac7d93acf4e2611eafe94776a594121cbc8d0"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.377630 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" event={"ID":"437c5088-93d6-4331-8671-e4e537e553a7","Type":"ContainerStarted","Data":"4759ce50adf74d7357982c2d6384746e523255d74c0dc4879ca7eefd9fbfb3fa"} Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.381682 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" event={"ID":"fa0c0f93-841b-4e62-becb-32dcf40ae439","Type":"ContainerStarted","Data":"498870f9643b6552bc9f3236e76309d5cea6db8151f041d1c94888bb1acdf6f2"} Oct 06 11:59:54 crc kubenswrapper[4698]: W1006 11:59:54.382454 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc03f5f3c_6e6c_4eba_9a1f_695c23c0d995.slice/crio-6b6ea5da8d009ed6251c8ea255b32020ebe067d9777f47b12f17a54742518fb6 WatchSource:0}: Error finding container 6b6ea5da8d009ed6251c8ea255b32020ebe067d9777f47b12f17a54742518fb6: Status 404 returned error can't find the container with id 6b6ea5da8d009ed6251c8ea255b32020ebe067d9777f47b12f17a54742518fb6 Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.383617 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" event={"ID":"38d45acb-51da-4535-a6a8-a317360f96fd","Type":"ContainerStarted","Data":"a11e12c7f97114c32cb285f404abd3ef4d861b748a72f604a157846adb0169ed"} Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.383712 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c7lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-658588b8c9-p5rgb_openstack-operators(c03f5f3c-6e6c-4eba-9a1f-695c23c0d995): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.387150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" event={"ID":"0b715645-3bcb-4443-892b-e30062c78a7f","Type":"ContainerStarted","Data":"0b5aff25f05fd7cdcafe16146aef22b5d0b397470652a3c5955d0e1b455ed272"} Oct 06 11:59:54 crc kubenswrapper[4698]: W1006 11:59:54.391498 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd7d60a_943c_42e8_9b96_74e76f1338f6.slice/crio-a6935aa1eb702fb5b4e7532b2f5f2134dce72c113e824a325fab31200825fb01 WatchSource:0}: Error finding container a6935aa1eb702fb5b4e7532b2f5f2134dce72c113e824a325fab31200825fb01: Status 404 returned error can't find the container with id a6935aa1eb702fb5b4e7532b2f5f2134dce72c113e824a325fab31200825fb01 Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.410484 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hbnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5d4d74dd89-r6ntv_openstack-operators(6cd7d60a-943c-42e8-9b96-74e76f1338f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.448848 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" podUID="9543eb0d-82ab-4599-b094-8789588846af" Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.598538 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" podUID="33858802-bf6b-42d2-bdc6-8ec2202dd1fe" Oct 06 11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.631235 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" podUID="6cd7d60a-943c-42e8-9b96-74e76f1338f6" Oct 06 
11:59:54 crc kubenswrapper[4698]: E1006 11:59:54.800505 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" podUID="c03f5f3c-6e6c-4eba-9a1f-695c23c0d995" Oct 06 11:59:54 crc kubenswrapper[4698]: I1006 11:59:54.945436 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp"] Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.234865 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.235264 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.438174 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" event={"ID":"6fdb9f18-6759-435a-bae6-90271f8da5b0","Type":"ContainerStarted","Data":"f65c26f3d71c80c4a313b21f3a33e043c8d4ef3a632c394e9ab9bb4ee132bb17"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.451122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" event={"ID":"572054de-889b-43ac-abb2-8bca55810d18","Type":"ContainerStarted","Data":"3afa09d7d46a00454136294c976a717bb5b577b6ec0b8fd6d3834e2ee46d5156"} Oct 06 11:59:55 crc 
kubenswrapper[4698]: E1006 11:59:55.459582 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" podUID="572054de-889b-43ac-abb2-8bca55810d18" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.468109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" event={"ID":"9543eb0d-82ab-4599-b094-8789588846af","Type":"ContainerStarted","Data":"b85f1a2afd68643a2d54dd9483f1fe806aaaa72641b897cbbf6d72aa7b9f3dce"} Oct 06 11:59:55 crc kubenswrapper[4698]: E1006 11:59:55.470305 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" podUID="9543eb0d-82ab-4599-b094-8789588846af" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.473092 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" event={"ID":"7c1cbc98-12c2-409b-b673-0f3df8edd0fc","Type":"ContainerStarted","Data":"f4b5084298b4619db5e1841442012264ab395624f0280e26c48cab845c0f8af9"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.473173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" event={"ID":"7c1cbc98-12c2-409b-b673-0f3df8edd0fc","Type":"ContainerStarted","Data":"4e3497d431933412aae2e86e4b436565d640a9b91bcdde91da170d7a36bf3de9"} Oct 06 11:59:55 
crc kubenswrapper[4698]: I1006 11:59:55.473200 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.473212 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" event={"ID":"7c1cbc98-12c2-409b-b673-0f3df8edd0fc","Type":"ContainerStarted","Data":"1d8b0f9cc0bbea0e21b17114a669108b0299537e1e98d19181951e4fc36800bd"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.474601 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" event={"ID":"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995","Type":"ContainerStarted","Data":"24b5acd56554bd4b0a3dee26d453d69cb0d4942857495bed32a123444e8e12f6"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.474631 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" event={"ID":"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995","Type":"ContainerStarted","Data":"6b6ea5da8d009ed6251c8ea255b32020ebe067d9777f47b12f17a54742518fb6"} Oct 06 11:59:55 crc kubenswrapper[4698]: E1006 11:59:55.475930 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" podUID="c03f5f3c-6e6c-4eba-9a1f-695c23c0d995" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.480524 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" 
event={"ID":"6cd7d60a-943c-42e8-9b96-74e76f1338f6","Type":"ContainerStarted","Data":"b58d6ee0a7025b103f61d62523f226f91c5c7158a1f4d5e9d71dde194ffdeec6"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.480586 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" event={"ID":"6cd7d60a-943c-42e8-9b96-74e76f1338f6","Type":"ContainerStarted","Data":"a6935aa1eb702fb5b4e7532b2f5f2134dce72c113e824a325fab31200825fb01"} Oct 06 11:59:55 crc kubenswrapper[4698]: E1006 11:59:55.486066 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" podUID="6cd7d60a-943c-42e8-9b96-74e76f1338f6" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.493718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" event={"ID":"744f45cb-8563-4bf2-90f1-59f2caa1e4f4","Type":"ContainerStarted","Data":"9815fce308d543e5442a6f2aabc87c04a331136bd1b911afb916fcdb24b573a3"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.499546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" event={"ID":"33858802-bf6b-42d2-bdc6-8ec2202dd1fe","Type":"ContainerStarted","Data":"ebf0597ef8846a66bf9fabd8da06585a2f59ae843453d021e68b7aed06f15072"} Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.499572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" 
event={"ID":"33858802-bf6b-42d2-bdc6-8ec2202dd1fe","Type":"ContainerStarted","Data":"f62deb3a260d3e2a4e6b3449c2534dc6dff37285ed652b870cdca818292922fa"} Oct 06 11:59:55 crc kubenswrapper[4698]: E1006 11:59:55.512388 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.75:5001/openstack-k8s-operators/watcher-operator:45a54c30f8614d542491b9f49d81622c14cc615b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" podUID="33858802-bf6b-42d2-bdc6-8ec2202dd1fe" Oct 06 11:59:55 crc kubenswrapper[4698]: I1006 11:59:55.514923 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" podStartSLOduration=3.514902515 podStartE2EDuration="3.514902515s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 11:59:55.512816513 +0000 UTC m=+882.925508686" watchObservedRunningTime="2025-10-06 11:59:55.514902515 +0000 UTC m=+882.927594678" Oct 06 11:59:56 crc kubenswrapper[4698]: E1006 11:59:56.580912 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" podUID="9543eb0d-82ab-4599-b094-8789588846af" Oct 06 11:59:56 crc kubenswrapper[4698]: E1006 11:59:56.580783 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.75:5001/openstack-k8s-operators/watcher-operator:45a54c30f8614d542491b9f49d81622c14cc615b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" podUID="33858802-bf6b-42d2-bdc6-8ec2202dd1fe" Oct 06 11:59:56 crc kubenswrapper[4698]: E1006 11:59:56.582798 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" podUID="6cd7d60a-943c-42e8-9b96-74e76f1338f6" Oct 06 11:59:56 crc kubenswrapper[4698]: E1006 11:59:56.582900 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" podUID="c03f5f3c-6e6c-4eba-9a1f-695c23c0d995" Oct 06 11:59:56 crc kubenswrapper[4698]: E1006 11:59:56.583990 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" podUID="572054de-889b-43ac-abb2-8bca55810d18" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.168705 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk"] Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.173745 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.177537 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.178895 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.203050 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk"] Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.287991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.288068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.288298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsdp7\" (UniqueName: \"kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.390287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.390363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.390445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsdp7\" (UniqueName: \"kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.394415 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.414864 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.423441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsdp7\" (UniqueName: \"kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7\") pod \"collect-profiles-29329200-v4hpk\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:00 crc kubenswrapper[4698]: I1006 12:00:00.515812 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:03 crc kubenswrapper[4698]: I1006 12:00:03.080692 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-767bcbdf69-tr7dh" Oct 06 12:00:07 crc kubenswrapper[4698]: E1006 12:00:07.054476 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842" Oct 06 12:00:07 crc kubenswrapper[4698]: E1006 12:00:07.055213 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c68wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-7c7fc454ff-rgxcq_openstack-operators(38d45acb-51da-4535-a6a8-a317360f96fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:00:07 crc kubenswrapper[4698]: E1006 12:00:07.578738 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe" Oct 06 12:00:07 crc kubenswrapper[4698]: E1006 12:00:07.579004 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7p7dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6cd6d7bdf5-hg86j_openstack-operators(9d910961-2283-4129-a2e0-6cec10da5779): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.108383 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799" Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.109507 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-w
orker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podif
ied-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTR
ON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-
centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:curre
nt-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antel
ope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podi
fied-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfg9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp_openstack-operators(744f45cb-8563-4bf2-90f1-59f2caa1e4f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.479980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" podUID="744f45cb-8563-4bf2-90f1-59f2caa1e4f4" Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.485595 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" podUID="38d45acb-51da-4535-a6a8-a317360f96fd" Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.494244 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" podUID="9d910961-2283-4129-a2e0-6cec10da5779" Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.682514 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk"] Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.725129 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" event={"ID":"9d910961-2283-4129-a2e0-6cec10da5779","Type":"ContainerStarted","Data":"e37161ecc45fea66ccfc84b73b2539c4cd350005519f0268c4be5f8cf56deb70"} Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.729506 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" podUID="9d910961-2283-4129-a2e0-6cec10da5779" Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.729740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" event={"ID":"38d45acb-51da-4535-a6a8-a317360f96fd","Type":"ContainerStarted","Data":"1a211437da3e67074b3c19c45860d624f8f57c19d288b5e1cba431ea6f237a17"} Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.736259 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" 
podUID="38d45acb-51da-4535-a6a8-a317360f96fd" Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.737000 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" event={"ID":"56e863a6-f963-4d2f-9de6-7805ff14e90a","Type":"ContainerStarted","Data":"fc74d451ca3f7a5daf6ba37b585315bb23810393116c55ea85a07906e777fc48"} Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.743135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" event={"ID":"744f45cb-8563-4bf2-90f1-59f2caa1e4f4","Type":"ContainerStarted","Data":"b7bf9db490337f3b24478218b63c446e672764da1dcda3133dec8a964167372d"} Oct 06 12:00:08 crc kubenswrapper[4698]: E1006 12:00:08.747612 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" podUID="744f45cb-8563-4bf2-90f1-59f2caa1e4f4" Oct 06 12:00:08 crc kubenswrapper[4698]: I1006 12:00:08.751234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" event={"ID":"e2d5b718-b49a-46c0-9f1d-1e536ff62301","Type":"ContainerStarted","Data":"8cb1a67de77239788e9b11b1bb921ca79bbd04e93b9d456751d21fb7ad2b7665"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.764314 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" event={"ID":"92e02173-4289-4b84-b3b2-01b78d0a7205","Type":"ContainerStarted","Data":"26e08d49ddfed44302713cec14881ec13914d0267fe3d5f0f56bcfce4a13b718"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 
12:00:09.768561 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" event={"ID":"110b7f13-850f-41a3-aadb-df0f5559ba1d","Type":"ContainerStarted","Data":"071bec638b4572b2bfc3eb54317b1b3e00ab681303dea469e9f9d669b14b13dc"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.772919 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" event={"ID":"6fdb9f18-6759-435a-bae6-90271f8da5b0","Type":"ContainerStarted","Data":"fcabe2427904a41985538b0ad5d9f1a621dbb1d57c60bef5595b080e2a8dcd84"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.781039 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" event={"ID":"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9","Type":"ContainerStarted","Data":"e065b7ba4b41294ed5bf6357d0f5412702a5a0aff26e9ea62f6e7dc10ac10443"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.787110 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" event={"ID":"d2432ca3-e684-4c81-95c8-1e57826d09d6","Type":"ContainerStarted","Data":"9891cd309a0882b9c33f3ab3b055bc9ff10b252ee857786a258d5c9ef7bc17db"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.790118 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" event={"ID":"6e97841f-b15e-4834-a445-d2a632d7021a","Type":"ContainerStarted","Data":"033d58d168a6bad4566a470de50c81e108484348087074a3fe9c77450e7d9c28"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.792742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" 
event={"ID":"0b715645-3bcb-4443-892b-e30062c78a7f","Type":"ContainerStarted","Data":"1de7c48072eed3875aa6e373ed42aacc8e8cba9f4d995355ffbe0cb8a53c0dbe"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.805934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" event={"ID":"d6f6350d-b33d-4ac5-b364-c80145b4b742","Type":"ContainerStarted","Data":"33277cd1d28d54e0868cf906abe0a71616a4939ad1196e7b0ac56b3258ee39c9"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.844192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" event={"ID":"b632a477-335c-4b0e-a83e-3812409b8afa","Type":"ContainerStarted","Data":"a31c553af724fbdb09c2af6f26beaffef0166be72f0d847c79e5b47caff25416"} Oct 06 12:00:09 crc kubenswrapper[4698]: I1006 12:00:09.852914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" event={"ID":"fa0c0f93-841b-4e62-becb-32dcf40ae439","Type":"ContainerStarted","Data":"90234e909d871933bf74732eb1964649d80b5c51d609a7a01fc4aaa21ab438b3"} Oct 06 12:00:09 crc kubenswrapper[4698]: E1006 12:00:09.858884 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" podUID="9d910961-2283-4129-a2e0-6cec10da5779" Oct 06 12:00:09 crc kubenswrapper[4698]: E1006 12:00:09.858979 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" podUID="38d45acb-51da-4535-a6a8-a317360f96fd" Oct 06 12:00:09 crc kubenswrapper[4698]: E1006 12:00:09.859031 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" podUID="744f45cb-8563-4bf2-90f1-59f2caa1e4f4" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.884847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" event={"ID":"d2432ca3-e684-4c81-95c8-1e57826d09d6","Type":"ContainerStarted","Data":"75446e3062cc20a64cacf0c883c634da81a7a73fd99364f74386f7d313bc55e5"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.885343 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.894086 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" event={"ID":"437c5088-93d6-4331-8671-e4e537e553a7","Type":"ContainerStarted","Data":"2cea8f7e1275e4130375c63e2a95510105a338e73f824ddcfce3882cc2083c67"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.894119 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" event={"ID":"437c5088-93d6-4331-8671-e4e537e553a7","Type":"ContainerStarted","Data":"02e26e7b7f60f3b836454e6660400ce05502077a4631071957e7bddf60f4afab"} 
Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.894299 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.899930 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" event={"ID":"6e97841f-b15e-4834-a445-d2a632d7021a","Type":"ContainerStarted","Data":"38b9468d6689c18021b6f5998d5af127469dd08627d76fce19e0929ec7d0d874"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.900249 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.913619 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" event={"ID":"6fdb9f18-6759-435a-bae6-90271f8da5b0","Type":"ContainerStarted","Data":"9bb39556dd51436caf1dcb3e56518543b619e7d64daec8c792948bbb5481a01a"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.914513 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.919865 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" podStartSLOduration=4.800883065 podStartE2EDuration="19.919848173s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.034094982 +0000 UTC m=+880.446787155" lastFinishedPulling="2025-10-06 12:00:08.15306009 +0000 UTC m=+895.565752263" observedRunningTime="2025-10-06 12:00:10.918571271 +0000 UTC m=+898.331263454" watchObservedRunningTime="2025-10-06 12:00:10.919848173 +0000 UTC m=+898.332540346" Oct 06 12:00:10 
crc kubenswrapper[4698]: I1006 12:00:10.929410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" event={"ID":"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1","Type":"ContainerStarted","Data":"d06e8f8c523051344deb3d887b879745a7a48c16b3c59f41cf3136e7c73dd9e5"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.929454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" event={"ID":"b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1","Type":"ContainerStarted","Data":"15c17740436e832f711f8e4c71154e1d31e37ef4eda22d16c0ee92b486a96c28"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.930310 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.933451 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" event={"ID":"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9","Type":"ContainerStarted","Data":"c00346d332ba6bd78b557398ff2ddd1296d0ec260bfd8d6529b237f9de9a668b"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.935506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" event={"ID":"d6f6350d-b33d-4ac5-b364-c80145b4b742","Type":"ContainerStarted","Data":"4afb8021c20a6876b2f416bc82e82341009323f5069680d0262863ef53a05eaf"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.935934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.948437 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" 
event={"ID":"b632a477-335c-4b0e-a83e-3812409b8afa","Type":"ContainerStarted","Data":"41125685b7017fc023588ae01d55b8283143a15569d38c3ddcafb3ed95d8200f"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.949267 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.951362 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" event={"ID":"92e02173-4289-4b84-b3b2-01b78d0a7205","Type":"ContainerStarted","Data":"cad1e8e18577c7c28606b2d270a5e9a64d4935ddf583ea0c81da4ad6e69b3042"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.951763 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.958419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" event={"ID":"56e863a6-f963-4d2f-9de6-7805ff14e90a","Type":"ContainerStarted","Data":"d8a165c1f70205c66826918337fb4b06e772a680d87396ff54b50567baeb49a0"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.959445 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.963901 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" podStartSLOduration=4.925654569 podStartE2EDuration="18.963873569s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.152522257 +0000 UTC m=+881.565214430" lastFinishedPulling="2025-10-06 12:00:08.190741237 +0000 UTC m=+895.603433430" 
observedRunningTime="2025-10-06 12:00:10.952806281 +0000 UTC m=+898.365498454" watchObservedRunningTime="2025-10-06 12:00:10.963873569 +0000 UTC m=+898.376565742" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.973202 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" event={"ID":"e2d5b718-b49a-46c0-9f1d-1e536ff62301","Type":"ContainerStarted","Data":"bf8ec80fed2603dcf32c5128a9f6b72834afa69b8070bb914a6ac0ea3fac8c7a"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.974029 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.982378 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" podStartSLOduration=4.925778802 podStartE2EDuration="18.982355884s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.137290614 +0000 UTC m=+881.549982787" lastFinishedPulling="2025-10-06 12:00:08.193867706 +0000 UTC m=+895.606559869" observedRunningTime="2025-10-06 12:00:10.980686512 +0000 UTC m=+898.393378685" watchObservedRunningTime="2025-10-06 12:00:10.982355884 +0000 UTC m=+898.395048057" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.988436 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" event={"ID":"110b7f13-850f-41a3-aadb-df0f5559ba1d","Type":"ContainerStarted","Data":"f62ff24e0aa6f97c03c93f5db871907ca906014af9a9049ffaf5cd089caf6121"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.989321 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.997648 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" event={"ID":"fa0c0f93-841b-4e62-becb-32dcf40ae439","Type":"ContainerStarted","Data":"f02af441168992ceee1db7ceecf173fb9b62d6766fbafae81d7570b56262ae10"} Oct 06 12:00:10 crc kubenswrapper[4698]: I1006 12:00:10.997836 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.007992 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" podStartSLOduration=5.144994294 podStartE2EDuration="19.007971428s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.329321292 +0000 UTC m=+881.742013465" lastFinishedPulling="2025-10-06 12:00:08.192298426 +0000 UTC m=+895.604990599" observedRunningTime="2025-10-06 12:00:11.002175522 +0000 UTC m=+898.414867695" watchObservedRunningTime="2025-10-06 12:00:11.007971428 +0000 UTC m=+898.420663601" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.013476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" event={"ID":"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc","Type":"ContainerStarted","Data":"a771b4adffab15ac95ea1b62f05437c84c99f68124c85bb543dec1f322a3c6de"} Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.013551 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" event={"ID":"782cf4ae-9b34-46e9-9bfc-c7da6118c2dc","Type":"ContainerStarted","Data":"66c433b7203863a4fbea3a5a7cde8ff824ee061cd260472216a354a0f6033e7c"} Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.015128 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.026807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" event={"ID":"0b715645-3bcb-4443-892b-e30062c78a7f","Type":"ContainerStarted","Data":"821ede6a6aee1980841d5f41387d9d458d97e87af045e09873152d1122a6f59d"} Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.027054 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.035321 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" podStartSLOduration=4.916936421 podStartE2EDuration="20.035308945s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.073889872 +0000 UTC m=+880.486582045" lastFinishedPulling="2025-10-06 12:00:08.192262376 +0000 UTC m=+895.604954569" observedRunningTime="2025-10-06 12:00:11.030641228 +0000 UTC m=+898.443333391" watchObservedRunningTime="2025-10-06 12:00:11.035308945 +0000 UTC m=+898.448001118" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.074597 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" podStartSLOduration=5.046113738 podStartE2EDuration="19.074568592s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.124620456 +0000 UTC m=+881.537312629" lastFinishedPulling="2025-10-06 12:00:08.15307528 +0000 UTC m=+895.565767483" observedRunningTime="2025-10-06 12:00:11.056828627 +0000 UTC m=+898.469520810" watchObservedRunningTime="2025-10-06 12:00:11.074568592 +0000 UTC m=+898.487260765" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 
12:00:11.088692 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" podStartSLOduration=5.276486031 podStartE2EDuration="20.088651156s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.340902545 +0000 UTC m=+880.753594718" lastFinishedPulling="2025-10-06 12:00:08.15306767 +0000 UTC m=+895.565759843" observedRunningTime="2025-10-06 12:00:11.085898497 +0000 UTC m=+898.498590670" watchObservedRunningTime="2025-10-06 12:00:11.088651156 +0000 UTC m=+898.501343329" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.115303 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" podStartSLOduration=5.714065581 podStartE2EDuration="20.115283755s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.807572586 +0000 UTC m=+881.220264759" lastFinishedPulling="2025-10-06 12:00:08.20879076 +0000 UTC m=+895.621482933" observedRunningTime="2025-10-06 12:00:11.113958712 +0000 UTC m=+898.526650885" watchObservedRunningTime="2025-10-06 12:00:11.115283755 +0000 UTC m=+898.527975928" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.164847 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" podStartSLOduration=5.120180342 podStartE2EDuration="20.164826182s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.16411692 +0000 UTC m=+880.576809083" lastFinishedPulling="2025-10-06 12:00:08.20876274 +0000 UTC m=+895.621454923" observedRunningTime="2025-10-06 12:00:11.163396056 +0000 UTC m=+898.576088229" watchObservedRunningTime="2025-10-06 12:00:11.164826182 +0000 UTC m=+898.577518355" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.183927 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" podStartSLOduration=5.20204296 podStartE2EDuration="20.183907751s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.228283993 +0000 UTC m=+880.640976156" lastFinishedPulling="2025-10-06 12:00:08.210148754 +0000 UTC m=+895.622840947" observedRunningTime="2025-10-06 12:00:11.182393723 +0000 UTC m=+898.595085896" watchObservedRunningTime="2025-10-06 12:00:11.183907751 +0000 UTC m=+898.596599924" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.205230 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" podStartSLOduration=11.205204696 podStartE2EDuration="11.205204696s" podCreationTimestamp="2025-10-06 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:00:11.195944844 +0000 UTC m=+898.608637017" watchObservedRunningTime="2025-10-06 12:00:11.205204696 +0000 UTC m=+898.617896869" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.218110 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" podStartSLOduration=5.764201581 podStartE2EDuration="20.21808328s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.777852989 +0000 UTC m=+881.190545162" lastFinishedPulling="2025-10-06 12:00:08.231734688 +0000 UTC m=+895.644426861" observedRunningTime="2025-10-06 12:00:11.214866489 +0000 UTC m=+898.627558662" watchObservedRunningTime="2025-10-06 12:00:11.21808328 +0000 UTC m=+898.630775453" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.242386 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" podStartSLOduration=5.578646387 podStartE2EDuration="20.24236502s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.529113886 +0000 UTC m=+880.941806059" lastFinishedPulling="2025-10-06 12:00:08.192832499 +0000 UTC m=+895.605524692" observedRunningTime="2025-10-06 12:00:11.234462351 +0000 UTC m=+898.647154534" watchObservedRunningTime="2025-10-06 12:00:11.24236502 +0000 UTC m=+898.655057193" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.292133 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" podStartSLOduration=5.169400269 podStartE2EDuration="19.292108601s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.132188877 +0000 UTC m=+881.544881050" lastFinishedPulling="2025-10-06 12:00:08.254897179 +0000 UTC m=+895.667589382" observedRunningTime="2025-10-06 12:00:11.262379633 +0000 UTC m=+898.675071806" watchObservedRunningTime="2025-10-06 12:00:11.292108601 +0000 UTC m=+898.704800774" Oct 06 12:00:11 crc kubenswrapper[4698]: I1006 12:00:11.301427 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" podStartSLOduration=5.908641842 podStartE2EDuration="20.301400514s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.801560615 +0000 UTC m=+881.214252788" lastFinishedPulling="2025-10-06 12:00:08.194319287 +0000 UTC m=+895.607011460" observedRunningTime="2025-10-06 12:00:11.290690746 +0000 UTC m=+898.703382919" watchObservedRunningTime="2025-10-06 12:00:11.301400514 +0000 UTC m=+898.714092687" Oct 06 12:00:12 crc kubenswrapper[4698]: I1006 12:00:12.037364 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" 
containerID="c00346d332ba6bd78b557398ff2ddd1296d0ec260bfd8d6529b237f9de9a668b" exitCode=0 Oct 06 12:00:12 crc kubenswrapper[4698]: I1006 12:00:12.037588 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" event={"ID":"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9","Type":"ContainerDied","Data":"c00346d332ba6bd78b557398ff2ddd1296d0ec260bfd8d6529b237f9de9a668b"} Oct 06 12:00:13 crc kubenswrapper[4698]: I1006 12:00:13.053286 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-649675d675-4zdhn" Oct 06 12:00:13 crc kubenswrapper[4698]: I1006 12:00:13.053659 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gprg9" Oct 06 12:00:13 crc kubenswrapper[4698]: I1006 12:00:13.054247 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-j9bnp" Oct 06 12:00:13 crc kubenswrapper[4698]: I1006 12:00:13.056488 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-25w45" Oct 06 12:00:13 crc kubenswrapper[4698]: I1006 12:00:13.056530 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-tnv74" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.318896 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.442501 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsdp7\" (UniqueName: \"kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7\") pod \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.442636 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume\") pod \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.442763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume\") pod \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\" (UID: \"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9\") " Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.443485 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" (UID: "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.449161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7" (OuterVolumeSpecName: "kube-api-access-gsdp7") pod "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" (UID: "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9"). 
InnerVolumeSpecName "kube-api-access-gsdp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.452047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" (UID: "ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.545198 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsdp7\" (UniqueName: \"kubernetes.io/projected/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-kube-api-access-gsdp7\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.545237 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:14 crc kubenswrapper[4698]: I1006 12:00:14.545246 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:00:15 crc kubenswrapper[4698]: I1006 12:00:15.078906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" event={"ID":"ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9","Type":"ContainerDied","Data":"e065b7ba4b41294ed5bf6357d0f5412702a5a0aff26e9ea62f6e7dc10ac10443"} Oct 06 12:00:15 crc kubenswrapper[4698]: I1006 12:00:15.078972 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e065b7ba4b41294ed5bf6357d0f5412702a5a0aff26e9ea62f6e7dc10ac10443" Oct 06 12:00:15 crc kubenswrapper[4698]: I1006 12:00:15.079039 4698 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.107283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" event={"ID":"572054de-889b-43ac-abb2-8bca55810d18","Type":"ContainerStarted","Data":"f5fc510666e8a9d370220f3f781f365aeb8c219fc6c2beeccc784e87e1aede37"} Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.122081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" event={"ID":"9543eb0d-82ab-4599-b094-8789588846af","Type":"ContainerStarted","Data":"2af20ebd41abbf23a3c0cc0d961abe6e730dc1e207e2ab6dc3a5cc5510b023cc"} Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.123288 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.131979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" event={"ID":"c03f5f3c-6e6c-4eba-9a1f-695c23c0d995","Type":"ContainerStarted","Data":"f660070f7146546d37091677f0ce9cca5fca265cbddf2034bf63ba50b907dd15"} Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.133615 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.136950 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-85kfz" podStartSLOduration=3.311762048 podStartE2EDuration="25.136938921s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.368824145 +0000 UTC 
m=+881.781516318" lastFinishedPulling="2025-10-06 12:00:16.194000978 +0000 UTC m=+903.606693191" observedRunningTime="2025-10-06 12:00:17.133230098 +0000 UTC m=+904.545922271" watchObservedRunningTime="2025-10-06 12:00:17.136938921 +0000 UTC m=+904.549631094" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.141426 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" event={"ID":"6cd7d60a-943c-42e8-9b96-74e76f1338f6","Type":"ContainerStarted","Data":"b1d495cb88dcb7e67ba2dca7d89c4ecbe63fe3266bbc6093f4fa3315edc66ac6"} Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.141878 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.144549 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" event={"ID":"33858802-bf6b-42d2-bdc6-8ec2202dd1fe","Type":"ContainerStarted","Data":"5b0c696e8c9630d8265748dfcd3fb4492dd7cf023842d467a25addc95a0edaae"} Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.145187 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.170362 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" podStartSLOduration=4.149207362 podStartE2EDuration="26.170341751s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.383567795 +0000 UTC m=+881.796259968" lastFinishedPulling="2025-10-06 12:00:16.404702164 +0000 UTC m=+903.817394357" observedRunningTime="2025-10-06 12:00:17.166666578 +0000 UTC m=+904.579358751" watchObservedRunningTime="2025-10-06 
12:00:17.170341751 +0000 UTC m=+904.583033924" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.186782 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" podStartSLOduration=3.963257759 podStartE2EDuration="26.186765694s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.163489453 +0000 UTC m=+881.576181626" lastFinishedPulling="2025-10-06 12:00:16.386997338 +0000 UTC m=+903.799689561" observedRunningTime="2025-10-06 12:00:17.18304606 +0000 UTC m=+904.595738233" watchObservedRunningTime="2025-10-06 12:00:17.186765694 +0000 UTC m=+904.599457867" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.205305 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" podStartSLOduration=3.465125705 podStartE2EDuration="25.20528516s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.410256416 +0000 UTC m=+881.822948589" lastFinishedPulling="2025-10-06 12:00:16.150415871 +0000 UTC m=+903.563108044" observedRunningTime="2025-10-06 12:00:17.200525719 +0000 UTC m=+904.613217892" watchObservedRunningTime="2025-10-06 12:00:17.20528516 +0000 UTC m=+904.617977333" Oct 06 12:00:17 crc kubenswrapper[4698]: I1006 12:00:17.220027 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" podStartSLOduration=3.222341591 podStartE2EDuration="25.219990149s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.366987118 +0000 UTC m=+881.779679291" lastFinishedPulling="2025-10-06 12:00:16.364635656 +0000 UTC m=+903.777327849" observedRunningTime="2025-10-06 12:00:17.216286326 +0000 UTC m=+904.628978499" watchObservedRunningTime="2025-10-06 12:00:17.219990149 +0000 UTC 
m=+904.632682322" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.092812 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-fgmjd" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.202844 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-689sr" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.415479 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-wvf75" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.432278 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-jncqt" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.519316 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-swz27" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.596415 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-w5cv6" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.753291 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-vwcqq" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.754967 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-c8sxb" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.873237 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-496mk" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 
12:00:22.925299 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-r6ntv" Oct 06 12:00:22 crc kubenswrapper[4698]: I1006 12:00:22.979197 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-p5rgb" Oct 06 12:00:23 crc kubenswrapper[4698]: I1006 12:00:23.021843 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pnks4" Oct 06 12:00:23 crc kubenswrapper[4698]: I1006 12:00:23.062377 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-99f6c4584-gxz2f" Oct 06 12:00:25 crc kubenswrapper[4698]: I1006 12:00:25.235590 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:00:25 crc kubenswrapper[4698]: I1006 12:00:25.236143 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.231131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" event={"ID":"9d910961-2283-4129-a2e0-6cec10da5779","Type":"ContainerStarted","Data":"74d66d54d78e756d3f16aa0804307120aa234f3ba3230b53ed24fb71b2a46bbf"} Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.232353 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.234474 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" event={"ID":"38d45acb-51da-4535-a6a8-a317360f96fd","Type":"ContainerStarted","Data":"3c6a51dd1c7b9d62882146fb828a42086aab72622ffd8a751feacd28fd124578"} Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.234660 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.237284 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" event={"ID":"744f45cb-8563-4bf2-90f1-59f2caa1e4f4","Type":"ContainerStarted","Data":"4b0cd30ca1902db5097ff9e496d312f1cf570b4879c56b2cb23b6baebb5bf50d"} Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.237533 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.257484 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" podStartSLOduration=3.098687784 podStartE2EDuration="35.257451478s" podCreationTimestamp="2025-10-06 11:59:51 +0000 UTC" firstStartedPulling="2025-10-06 11:59:53.609478376 +0000 UTC m=+881.022170549" lastFinishedPulling="2025-10-06 12:00:25.76824204 +0000 UTC m=+913.180934243" observedRunningTime="2025-10-06 12:00:26.252230086 +0000 UTC m=+913.664922259" watchObservedRunningTime="2025-10-06 12:00:26.257451478 +0000 UTC m=+913.670143651" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.282517 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" podStartSLOduration=2.66778078 podStartE2EDuration="34.282494837s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:54.154199729 +0000 UTC m=+881.566891902" lastFinishedPulling="2025-10-06 12:00:25.768913746 +0000 UTC m=+913.181605959" observedRunningTime="2025-10-06 12:00:26.278345333 +0000 UTC m=+913.691037516" watchObservedRunningTime="2025-10-06 12:00:26.282494837 +0000 UTC m=+913.695187010" Oct 06 12:00:26 crc kubenswrapper[4698]: I1006 12:00:26.316447 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" podStartSLOduration=3.428889954 podStartE2EDuration="34.31641918s" podCreationTimestamp="2025-10-06 11:59:52 +0000 UTC" firstStartedPulling="2025-10-06 11:59:55.006139426 +0000 UTC m=+882.418831599" lastFinishedPulling="2025-10-06 12:00:25.893668612 +0000 UTC m=+913.306360825" observedRunningTime="2025-10-06 12:00:26.311453666 +0000 UTC m=+913.724145859" watchObservedRunningTime="2025-10-06 12:00:26.31641918 +0000 UTC m=+913.729111373" Oct 06 12:00:32 crc kubenswrapper[4698]: I1006 12:00:32.561554 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-hg86j" Oct 06 12:00:32 crc kubenswrapper[4698]: I1006 12:00:32.672727 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rgxcq" Oct 06 12:00:34 crc kubenswrapper[4698]: I1006 12:00:34.203847 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.473402 4698 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:00:54 crc kubenswrapper[4698]: E1006 12:00:54.475460 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" containerName="collect-profiles" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.475481 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" containerName="collect-profiles" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.475768 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" containerName="collect-profiles" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.482911 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.485367 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.485849 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5lbvp" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.486093 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.486287 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.498738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.563674 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.565375 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.575105 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.575184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr872\" (UniqueName: \"kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.578399 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.600916 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.678167 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.678247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc 
kubenswrapper[4698]: I1006 12:00:54.678285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.678323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr872\" (UniqueName: \"kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.678351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vv8t\" (UniqueName: \"kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.679711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.728070 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr872\" (UniqueName: \"kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872\") pod \"dnsmasq-dns-675f4bcbfc-gs8zz\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: 
I1006 12:00:54.779996 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.780084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.780119 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vv8t\" (UniqueName: \"kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.781766 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.782052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.811082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vv8t\" 
(UniqueName: \"kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t\") pod \"dnsmasq-dns-78dd6ddcc-8575b\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.818137 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:00:54 crc kubenswrapper[4698]: I1006 12:00:54.891192 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.235070 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.235622 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.235704 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.236572 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 
12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.236636 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67" gracePeriod=600 Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.306146 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:00:55 crc kubenswrapper[4698]: W1006 12:00:55.312148 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4991df9_94ab_4521_9b5e_8621f5a28d48.slice/crio-4742ab108226be35954a350954971c28ee6132311f838a4ffc1be60217d91e47 WatchSource:0}: Error finding container 4742ab108226be35954a350954971c28ee6132311f838a4ffc1be60217d91e47: Status 404 returned error can't find the container with id 4742ab108226be35954a350954971c28ee6132311f838a4ffc1be60217d91e47 Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.387944 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:00:55 crc kubenswrapper[4698]: W1006 12:00:55.399210 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb658c53d_baaf_4ecf_974e_b52a817c9eb6.slice/crio-6513d7ee9e8c1cacd47ccbbf0c67db049e94cb924c2bb8dd2d9e0a4a6ae96c75 WatchSource:0}: Error finding container 6513d7ee9e8c1cacd47ccbbf0c67db049e94cb924c2bb8dd2d9e0a4a6ae96c75: Status 404 returned error can't find the container with id 6513d7ee9e8c1cacd47ccbbf0c67db049e94cb924c2bb8dd2d9e0a4a6ae96c75 Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.567504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" 
event={"ID":"b658c53d-baaf-4ecf-974e-b52a817c9eb6","Type":"ContainerStarted","Data":"6513d7ee9e8c1cacd47ccbbf0c67db049e94cb924c2bb8dd2d9e0a4a6ae96c75"} Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.569860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" event={"ID":"c4991df9-94ab-4521-9b5e-8621f5a28d48","Type":"ContainerStarted","Data":"4742ab108226be35954a350954971c28ee6132311f838a4ffc1be60217d91e47"} Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.574453 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67" exitCode=0 Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.574508 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67"} Oct 06 12:00:55 crc kubenswrapper[4698]: I1006 12:00:55.574549 4698 scope.go:117] "RemoveContainer" containerID="8a16d893c0f7a2a418c0d8f658e6ae120b01ba5c1a19fd9cf040618be38aa7ba" Oct 06 12:00:56 crc kubenswrapper[4698]: I1006 12:00:56.588914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044"} Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.547501 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.586646 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.588860 4698 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.598323 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.736189 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.736250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.736303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7kn\" (UniqueName: \"kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.838841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.840274 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.839931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.840998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.841396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l7kn\" (UniqueName: \"kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.885657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l7kn\" (UniqueName: \"kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn\") pod \"dnsmasq-dns-5ccc8479f9-n57gb\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.889374 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.935273 4698 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.935915 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.937744 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:57 crc kubenswrapper[4698]: I1006 12:00:57.956450 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.045299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.045362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjvz\" (UniqueName: \"kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.045427 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.146630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.147114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.147172 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjjvz\" (UniqueName: \"kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.148238 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.148333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.188749 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjjvz\" (UniqueName: \"kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz\") pod 
\"dnsmasq-dns-57d769cc4f-8vjmg\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.286233 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.565954 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.604611 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:00:58 crc kubenswrapper[4698]: W1006 12:00:58.619694 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe78f3b_9af0_4900_9f02_13a593444d42.slice/crio-344b865657ef33b68829289c4b3b2ad284f1accd69cbea3f5f6485fd35e7492b WatchSource:0}: Error finding container 344b865657ef33b68829289c4b3b2ad284f1accd69cbea3f5f6485fd35e7492b: Status 404 returned error can't find the container with id 344b865657ef33b68829289c4b3b2ad284f1accd69cbea3f5f6485fd35e7492b Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.637324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" event={"ID":"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c","Type":"ContainerStarted","Data":"1e608e1f621b5591f06ecac76f01c27f3839c1f689620f861bb31fd0f78af74a"} Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.710893 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.713701 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.716646 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.716740 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.717254 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.717739 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r9gc7" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.717737 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.717952 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.718291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.720807 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867119 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867171 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867325 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.867524 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.869298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s24kg\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.869366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.869414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.869477 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.869504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970605 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970631 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970693 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970734 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24kg\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970781 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970808 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.970828 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.972176 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.972571 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.972846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.973286 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.974137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.974521 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:58 crc kubenswrapper[4698]: I1006 12:00:58.984924 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.001427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.001567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.002500 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24kg\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.002860 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.018406 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.044973 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.047293 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.048722 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.052180 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.056099 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.057419 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.057908 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.058245 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.058949 4698 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.059470 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-29ppb" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.068988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.175133 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.175489 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6jph\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.175557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.175626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: 
I1006 12:00:59.175802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.175893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.176081 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.176148 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.176397 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.176458 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.176495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278278 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278332 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6jph\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278546 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278566 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278870 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.278886 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.279500 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.281337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.282372 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.282728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.283582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.287788 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.289799 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.292420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.293272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc 
kubenswrapper[4698]: I1006 12:00:59.294171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.314574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.316417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6jph\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph\") pod \"rabbitmq-server-0\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.426650 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.618376 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:00:59 crc kubenswrapper[4698]: I1006 12:00:59.647553 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" event={"ID":"dbe78f3b-9af0-4900-9f02-13a593444d42","Type":"ContainerStarted","Data":"344b865657ef33b68829289c4b3b2ad284f1accd69cbea3f5f6485fd35e7492b"} Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.780232 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.785892 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.788580 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-b7hhp" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.788645 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.788846 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.789145 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.791399 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.802596 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.809716 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.925858 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.927432 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.933269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.933608 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934552 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5ccnt" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934649 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934707 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934743 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-kolla-config\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.934979 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-secrets\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.935067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-default\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.935185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.935303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx6q\" (UniqueName: \"kubernetes.io/projected/fa86326e-abe0-482b-94db-4579c8dfbc66-kube-api-access-qxx6q\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.935335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.935672 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 06 12:01:01 crc kubenswrapper[4698]: I1006 12:01:01.937161 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039875 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039926 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-kolla-config\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039976 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-secrets\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.039997 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 
12:01:02.040097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-default\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040176 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040193 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjd68\" (UniqueName: \"kubernetes.io/projected/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kube-api-access-bjd68\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040216 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040234 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.040292 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx6q\" (UniqueName: \"kubernetes.io/projected/fa86326e-abe0-482b-94db-4579c8dfbc66-kube-api-access-qxx6q\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.041621 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.043823 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.044504 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-kolla-config\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.044517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-config-data-default\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.047621 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa86326e-abe0-482b-94db-4579c8dfbc66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.055025 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-secrets\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.057374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.057992 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa86326e-abe0-482b-94db-4579c8dfbc66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.072831 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx6q\" (UniqueName: \"kubernetes.io/projected/fa86326e-abe0-482b-94db-4579c8dfbc66-kube-api-access-qxx6q\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.080718 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"fa86326e-abe0-482b-94db-4579c8dfbc66\") " pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148523 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148576 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148611 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148642 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjd68\" (UniqueName: 
\"kubernetes.io/projected/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kube-api-access-bjd68\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.148710 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.150149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.150393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.150485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.150719 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.151030 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.151261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6df0e48-e5a1-42b9-a3f9-712a00716e38-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.156818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.157119 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.168443 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjd68\" (UniqueName: \"kubernetes.io/projected/b6df0e48-e5a1-42b9-a3f9-712a00716e38-kube-api-access-bjd68\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc 
kubenswrapper[4698]: I1006 12:01:02.169772 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6df0e48-e5a1-42b9-a3f9-712a00716e38-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.219984 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.220285 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b6df0e48-e5a1-42b9-a3f9-712a00716e38\") " pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.221721 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.225480 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rzf7m" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.225697 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.228605 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.230968 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.260642 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.355307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.355403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-config-data\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.355451 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.355480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-kolla-config\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.355700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94q7g\" (UniqueName: \"kubernetes.io/projected/fa023504-b5d3-415a-a98c-8771aac74c06-kube-api-access-94q7g\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 
12:01:02.457223 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-config-data\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.457310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.457334 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-kolla-config\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.457376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94q7g\" (UniqueName: \"kubernetes.io/projected/fa023504-b5d3-415a-a98c-8771aac74c06-kube-api-access-94q7g\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.457472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.458451 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-config-data\") pod 
\"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.458520 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa023504-b5d3-415a-a98c-8771aac74c06-kolla-config\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.461901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.464441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa023504-b5d3-415a-a98c-8771aac74c06-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.483946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94q7g\" (UniqueName: \"kubernetes.io/projected/fa023504-b5d3-415a-a98c-8771aac74c06-kube-api-access-94q7g\") pod \"memcached-0\" (UID: \"fa023504-b5d3-415a-a98c-8771aac74c06\") " pod="openstack/memcached-0" Oct 06 12:01:02 crc kubenswrapper[4698]: I1006 12:01:02.577500 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.030080 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.035609 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.038504 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-q4bbq" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.045787 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.091048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9787m\" (UniqueName: \"kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m\") pod \"kube-state-metrics-0\" (UID: \"258084da-8b4a-484d-b10a-0511f89cac2f\") " pod="openstack/kube-state-metrics-0" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.193081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9787m\" (UniqueName: \"kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m\") pod \"kube-state-metrics-0\" (UID: \"258084da-8b4a-484d-b10a-0511f89cac2f\") " pod="openstack/kube-state-metrics-0" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.217385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9787m\" (UniqueName: \"kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m\") pod \"kube-state-metrics-0\" (UID: \"258084da-8b4a-484d-b10a-0511f89cac2f\") " pod="openstack/kube-state-metrics-0" Oct 06 12:01:04 crc kubenswrapper[4698]: I1006 12:01:04.353703 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.460781 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.464065 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.466812 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.467154 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-plfqm" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.468392 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.468793 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.469304 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.485189 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.489989 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.520506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.520839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.521506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjtd\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.623796 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.623907 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.623949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.624054 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.624277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.624320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.624363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjtd\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.625537 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.624416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.629449 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.629497 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44d562176676e8d8573ee8cf6c79a771697a87ae1dbf01fea0ea1f08f9081a45/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.632464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.632464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.635489 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.651700 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.652462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.660540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjtd\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.665229 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.717386 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerStarted","Data":"c97ccd2616b8442690e0609c71eecdf793d9e598f92a8539a2647a743870bacb"} Oct 06 12:01:05 crc kubenswrapper[4698]: I1006 12:01:05.794823 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.891267 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.893923 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.897166 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.897382 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lgwf2" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.897528 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.897649 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.897761 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.909305 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992142 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992189 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8vq\" (UniqueName: \"kubernetes.io/projected/3024f021-f705-443b-a7e1-bcb574c25fe7-kube-api-access-st8vq\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992235 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc 
kubenswrapper[4698]: I1006 12:01:07.992393 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992424 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-config\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:07 crc kubenswrapper[4698]: I1006 12:01:07.992490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.093934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.094087 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.094118 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.095156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.095188 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8vq\" (UniqueName: \"kubernetes.io/projected/3024f021-f705-443b-a7e1-bcb574c25fe7-kube-api-access-st8vq\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.096385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.096416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.096268 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.096614 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.097056 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.107162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-config\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.108353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3024f021-f705-443b-a7e1-bcb574c25fe7-config\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.121234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.124335 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.126337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3024f021-f705-443b-a7e1-bcb574c25fe7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.127342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8vq\" (UniqueName: \"kubernetes.io/projected/3024f021-f705-443b-a7e1-bcb574c25fe7-kube-api-access-st8vq\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.139057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3024f021-f705-443b-a7e1-bcb574c25fe7\") " pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.219595 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.223473 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qmjmg"] Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.225031 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.229516 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.229894 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nkj4d" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.230118 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.241177 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qmjmg"] Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.249508 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gx9kq"] Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.255429 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.273083 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gx9kq"] Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-log-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312304 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f95ws\" (UniqueName: \"kubernetes.io/projected/802f85d7-83b9-4361-ae5e-72d826586a43-kube-api-access-f95ws\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-lib\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-run\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xlx\" (UniqueName: \"kubernetes.io/projected/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-kube-api-access-l8xlx\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312444 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/802f85d7-83b9-4361-ae5e-72d826586a43-scripts\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-combined-ca-bundle\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-log\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-ovn-controller-tls-certs\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-etc-ovs\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312612 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-scripts\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.312660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f95ws\" (UniqueName: \"kubernetes.io/projected/802f85d7-83b9-4361-ae5e-72d826586a43-kube-api-access-f95ws\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414828 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-lib\") pod 
\"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-run\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414903 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xlx\" (UniqueName: \"kubernetes.io/projected/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-kube-api-access-l8xlx\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414932 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/802f85d7-83b9-4361-ae5e-72d826586a43-scripts\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-combined-ca-bundle\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.414972 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-log\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 
12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-ovn-controller-tls-certs\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-etc-ovs\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415093 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-scripts\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415133 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-log-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-log-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.415971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-run\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.416009 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run-ovn\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.416108 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-lib\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.416276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-var-run\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 
06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.416276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-var-log\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.416763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/802f85d7-83b9-4361-ae5e-72d826586a43-etc-ovs\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.418790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/802f85d7-83b9-4361-ae5e-72d826586a43-scripts\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.419057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-scripts\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.429719 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-combined-ca-bundle\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.432222 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-ovn-controller-tls-certs\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.432925 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f95ws\" (UniqueName: \"kubernetes.io/projected/802f85d7-83b9-4361-ae5e-72d826586a43-kube-api-access-f95ws\") pod \"ovn-controller-ovs-gx9kq\" (UID: \"802f85d7-83b9-4361-ae5e-72d826586a43\") " pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.441971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xlx\" (UniqueName: \"kubernetes.io/projected/7dd3b0e2-4d06-4c91-8539-4db08c7f2d23-kube-api-access-l8xlx\") pod \"ovn-controller-qmjmg\" (UID: \"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23\") " pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.550601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:08 crc kubenswrapper[4698]: I1006 12:01:08.579651 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.136806 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.140955 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.149104 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.149700 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tm8l9" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.150045 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.150739 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.158703 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208188 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44kg\" (UniqueName: \"kubernetes.io/projected/513c9b58-394d-48dd-a0c9-7ea2f4643f25-kube-api-access-j44kg\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208336 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-config\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208411 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.208568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 
06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311652 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311712 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311734 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44kg\" 
(UniqueName: \"kubernetes.io/projected/513c9b58-394d-48dd-a0c9-7ea2f4643f25-kube-api-access-j44kg\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311790 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.311805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-config\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.312738 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-config\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.313965 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.315769 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 
crc kubenswrapper[4698]: I1006 12:01:11.315993 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513c9b58-394d-48dd-a0c9-7ea2f4643f25-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.332723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.332745 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.333362 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513c9b58-394d-48dd-a0c9-7ea2f4643f25-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.335261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44kg\" (UniqueName: \"kubernetes.io/projected/513c9b58-394d-48dd-a0c9-7ea2f4643f25-kube-api-access-j44kg\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.349637 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"513c9b58-394d-48dd-a0c9-7ea2f4643f25\") " pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:11 crc kubenswrapper[4698]: I1006 12:01:11.519980 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:14 crc kubenswrapper[4698]: I1006 12:01:14.100039 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.684991 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.685637 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l7kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-n57gb_openstack(1944204a-3a3c-4036-a62d-3a0d2b4dbe1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.686946 4698 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" Oct 06 12:01:14 crc kubenswrapper[4698]: W1006 12:01:14.736144 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa86326e_abe0_482b_94db_4579c8dfbc66.slice/crio-030c1fb84ed613f6ddcebafdd540e76f2dfeeefd8f2dd57e4c267bfa256cd20c WatchSource:0}: Error finding container 030c1fb84ed613f6ddcebafdd540e76f2dfeeefd8f2dd57e4c267bfa256cd20c: Status 404 returned error can't find the container with id 030c1fb84ed613f6ddcebafdd540e76f2dfeeefd8f2dd57e4c267bfa256cd20c Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.777814 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.778078 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8575b_openstack(b658c53d-baaf-4ecf-974e-b52a817c9eb6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.780138 4698 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" podUID="b658c53d-baaf-4ecf-974e-b52a817c9eb6" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.802931 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.803119 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gs8zz_openstack(c4991df9-94ab-4521-9b5e-8621f5a28d48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.804616 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" podUID="c4991df9-94ab-4521-9b5e-8621f5a28d48" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.848729 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.849965 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjjvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8vjmg_openstack(dbe78f3b-9af0-4900-9f02-13a593444d42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.851259 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" Oct 06 12:01:14 crc kubenswrapper[4698]: I1006 12:01:14.891983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fa86326e-abe0-482b-94db-4579c8dfbc66","Type":"ContainerStarted","Data":"030c1fb84ed613f6ddcebafdd540e76f2dfeeefd8f2dd57e4c267bfa256cd20c"} Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.894319 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" Oct 06 12:01:14 crc kubenswrapper[4698]: E1006 12:01:14.895457 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.398764 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.501444 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.528557 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config\") pod \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.528806 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc\") pod \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.528886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vv8t\" (UniqueName: \"kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t\") pod \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\" (UID: \"b658c53d-baaf-4ecf-974e-b52a817c9eb6\") " Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.530743 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config" (OuterVolumeSpecName: "config") pod "b658c53d-baaf-4ecf-974e-b52a817c9eb6" (UID: "b658c53d-baaf-4ecf-974e-b52a817c9eb6"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.531279 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b658c53d-baaf-4ecf-974e-b52a817c9eb6" (UID: "b658c53d-baaf-4ecf-974e-b52a817c9eb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.550273 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t" (OuterVolumeSpecName: "kube-api-access-2vv8t") pod "b658c53d-baaf-4ecf-974e-b52a817c9eb6" (UID: "b658c53d-baaf-4ecf-974e-b52a817c9eb6"). InnerVolumeSpecName "kube-api-access-2vv8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.633544 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr872\" (UniqueName: \"kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872\") pod \"c4991df9-94ab-4521-9b5e-8621f5a28d48\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.633739 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config\") pod \"c4991df9-94ab-4521-9b5e-8621f5a28d48\" (UID: \"c4991df9-94ab-4521-9b5e-8621f5a28d48\") " Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.634165 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.634176 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vv8t\" (UniqueName: \"kubernetes.io/projected/b658c53d-baaf-4ecf-974e-b52a817c9eb6-kube-api-access-2vv8t\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.634187 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658c53d-baaf-4ecf-974e-b52a817c9eb6-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.634565 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config" (OuterVolumeSpecName: "config") pod "c4991df9-94ab-4521-9b5e-8621f5a28d48" (UID: "c4991df9-94ab-4521-9b5e-8621f5a28d48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.643218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872" (OuterVolumeSpecName: "kube-api-access-mr872") pod "c4991df9-94ab-4521-9b5e-8621f5a28d48" (UID: "c4991df9-94ab-4521-9b5e-8621f5a28d48"). InnerVolumeSpecName "kube-api-access-mr872". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.736454 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr872\" (UniqueName: \"kubernetes.io/projected/c4991df9-94ab-4521-9b5e-8621f5a28d48-kube-api-access-mr872\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.737006 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4991df9-94ab-4521-9b5e-8621f5a28d48-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.920817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" event={"ID":"c4991df9-94ab-4521-9b5e-8621f5a28d48","Type":"ContainerDied","Data":"4742ab108226be35954a350954971c28ee6132311f838a4ffc1be60217d91e47"} Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.920948 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gs8zz" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.937702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" event={"ID":"b658c53d-baaf-4ecf-974e-b52a817c9eb6","Type":"ContainerDied","Data":"6513d7ee9e8c1cacd47ccbbf0c67db049e94cb924c2bb8dd2d9e0a4a6ae96c75"} Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.937758 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8575b" Oct 06 12:01:16 crc kubenswrapper[4698]: I1006 12:01:16.999463 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.044364 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gs8zz"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.074694 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.076731 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8575b"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.118385 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qmjmg"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.129593 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: W1006 12:01:17.131673 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6df0e48_e5a1_42b9_a3f9_712a00716e38.slice/crio-f42549229b263749f135591b2d34b09f6cd690e71a6fac8febc7a9558a72f205 WatchSource:0}: Error finding container f42549229b263749f135591b2d34b09f6cd690e71a6fac8febc7a9558a72f205: Status 404 returned error can't find the container with id f42549229b263749f135591b2d34b09f6cd690e71a6fac8febc7a9558a72f205 Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.345338 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b658c53d-baaf-4ecf-974e-b52a817c9eb6" path="/var/lib/kubelet/pods/b658c53d-baaf-4ecf-974e-b52a817c9eb6/volumes" Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.345843 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c4991df9-94ab-4521-9b5e-8621f5a28d48" path="/var/lib/kubelet/pods/c4991df9-94ab-4521-9b5e-8621f5a28d48/volumes" Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.428699 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.452993 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.555055 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: W1006 12:01:17.562964 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513c9b58_394d_48dd_a0c9_7ea2f4643f25.slice/crio-20c624f196461ec8167d0cca5b00e8c58a0881763d4e55173902ef9a9367573c WatchSource:0}: Error finding container 20c624f196461ec8167d0cca5b00e8c58a0881763d4e55173902ef9a9367573c: Status 404 returned error can't find the container with id 20c624f196461ec8167d0cca5b00e8c58a0881763d4e55173902ef9a9367573c Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.654579 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gx9kq"] Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.694066 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: W1006 12:01:17.704361 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa023504_b5d3_415a_a98c_8771aac74c06.slice/crio-14c341ceb022ec1b434be040dbc43da4dc298e4df84dad0f66c08a4d58982821 WatchSource:0}: Error finding container 14c341ceb022ec1b434be040dbc43da4dc298e4df84dad0f66c08a4d58982821: Status 404 returned error can't find the container with id 14c341ceb022ec1b434be040dbc43da4dc298e4df84dad0f66c08a4d58982821 Oct 06 12:01:17 crc 
kubenswrapper[4698]: I1006 12:01:17.752978 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: W1006 12:01:17.765712 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaca88314_f6aa_4d15_8c81_2a4c66d4297f.slice/crio-52e72b787168ab03161ebe0da87bb1efe2a1a61507ffccb51a57527eec67d928 WatchSource:0}: Error finding container 52e72b787168ab03161ebe0da87bb1efe2a1a61507ffccb51a57527eec67d928: Status 404 returned error can't find the container with id 52e72b787168ab03161ebe0da87bb1efe2a1a61507ffccb51a57527eec67d928 Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.797050 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 12:01:17 crc kubenswrapper[4698]: W1006 12:01:17.798863 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3024f021_f705_443b_a7e1_bcb574c25fe7.slice/crio-a5baae951aa8b8cc284e9498d55a95b57559b3b7fb522c488b5fada16b439b72 WatchSource:0}: Error finding container a5baae951aa8b8cc284e9498d55a95b57559b3b7fb522c488b5fada16b439b72: Status 404 returned error can't find the container with id a5baae951aa8b8cc284e9498d55a95b57559b3b7fb522c488b5fada16b439b72 Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.947891 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qmjmg" event={"ID":"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23","Type":"ContainerStarted","Data":"8eb000c5955f3a90fb216ec04398bafe214c8991ce9134af8dcc7a83056fe2f9"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.950647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerStarted","Data":"52e72b787168ab03161ebe0da87bb1efe2a1a61507ffccb51a57527eec67d928"} Oct 
06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.952944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3024f021-f705-443b-a7e1-bcb574c25fe7","Type":"ContainerStarted","Data":"a5baae951aa8b8cc284e9498d55a95b57559b3b7fb522c488b5fada16b439b72"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.954711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gx9kq" event={"ID":"802f85d7-83b9-4361-ae5e-72d826586a43","Type":"ContainerStarted","Data":"99bfa5dba76b8d3aac38507bdc77ea8c8a236f7d5a112ccb6b3fbae804dad306"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.956456 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"258084da-8b4a-484d-b10a-0511f89cac2f","Type":"ContainerStarted","Data":"617f83fee5d389141e2392d272b076d68df66f74caacee931d1f202774cffc52"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.959211 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa023504-b5d3-415a-a98c-8771aac74c06","Type":"ContainerStarted","Data":"14c341ceb022ec1b434be040dbc43da4dc298e4df84dad0f66c08a4d58982821"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.961466 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerStarted","Data":"b89da9be782cc566d3e793a532e4d5c223dce72daccc95dba515bd37d0578566"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.963235 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"513c9b58-394d-48dd-a0c9-7ea2f4643f25","Type":"ContainerStarted","Data":"20c624f196461ec8167d0cca5b00e8c58a0881763d4e55173902ef9a9367573c"} Oct 06 12:01:17 crc kubenswrapper[4698]: I1006 12:01:17.965586 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b6df0e48-e5a1-42b9-a3f9-712a00716e38","Type":"ContainerStarted","Data":"f42549229b263749f135591b2d34b09f6cd690e71a6fac8febc7a9558a72f205"} Oct 06 12:01:18 crc kubenswrapper[4698]: I1006 12:01:18.983953 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerStarted","Data":"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505"} Oct 06 12:01:18 crc kubenswrapper[4698]: I1006 12:01:18.987991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerStarted","Data":"e08d2a623d5b99981a53f1fd0656087540b5036f8e39b6556f7d21bc8e446234"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.083120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gx9kq" event={"ID":"802f85d7-83b9-4361-ae5e-72d826586a43","Type":"ContainerStarted","Data":"b65b5a09d4820db2baf9bf1930860c27570e633429c3b6d8e470d72637e62bae"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.084785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b6df0e48-e5a1-42b9-a3f9-712a00716e38","Type":"ContainerStarted","Data":"f8ac3fb40eac544df1ac4af98ad97344b9561b4f99ea87cab70a8c5e576fef39"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.086789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa023504-b5d3-415a-a98c-8771aac74c06","Type":"ContainerStarted","Data":"0d5b749be407372f47b4634e2ff19c7429002e3e0c42ac4ae0553fd581d9ad6c"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.086949 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.090948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"fa86326e-abe0-482b-94db-4579c8dfbc66","Type":"ContainerStarted","Data":"13f12a7b6a1d5ccb4b54b0a2509bea86403465e273619145abed174fdaa3a451"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.092926 4698 generic.go:334] "Generic (PLEG): container finished" podID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerID="068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e" exitCode=0 Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.092977 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" event={"ID":"dbe78f3b-9af0-4900-9f02-13a593444d42","Type":"ContainerDied","Data":"068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.095205 4698 generic.go:334] "Generic (PLEG): container finished" podID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerID="0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074" exitCode=0 Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.095254 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" event={"ID":"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c","Type":"ContainerDied","Data":"0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.097915 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3024f021-f705-443b-a7e1-bcb574c25fe7","Type":"ContainerStarted","Data":"cb635c4d484126180ab59a8b55957707e3697cfd2d059813638c081e8e7f60c5"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.108438 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"513c9b58-394d-48dd-a0c9-7ea2f4643f25","Type":"ContainerStarted","Data":"f081d2c11be3036652072056ca1dbaffe0b70849c8bf80cbf2075ccc57003e19"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.114262 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"258084da-8b4a-484d-b10a-0511f89cac2f","Type":"ContainerStarted","Data":"825a8b03b342877d9c3baf904629d67aa6091402b56058dc43629fba007a7eb2"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.114362 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.124820 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qmjmg" event={"ID":"7dd3b0e2-4d06-4c91-8539-4db08c7f2d23","Type":"ContainerStarted","Data":"254dd0dd318cca681e6775800e4569d9be653172d930a844464f0bcc5005c02c"} Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.125609 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qmjmg" Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.265678 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.884060009 podStartE2EDuration="26.265656381s" podCreationTimestamp="2025-10-06 12:01:02 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.707225006 +0000 UTC m=+965.119917179" lastFinishedPulling="2025-10-06 12:01:26.088821378 +0000 UTC m=+973.501513551" observedRunningTime="2025-10-06 12:01:28.244916066 +0000 UTC m=+975.657608239" watchObservedRunningTime="2025-10-06 12:01:28.265656381 +0000 UTC m=+975.678348554" Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.295812 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.723169975 podStartE2EDuration="24.295786399s" podCreationTimestamp="2025-10-06 12:01:04 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.468661839 +0000 UTC m=+964.881354012" lastFinishedPulling="2025-10-06 12:01:27.041278253 +0000 UTC m=+974.453970436" observedRunningTime="2025-10-06 12:01:28.264280087 +0000 UTC m=+975.676972270" 
watchObservedRunningTime="2025-10-06 12:01:28.295786399 +0000 UTC m=+975.708478572" Oct 06 12:01:28 crc kubenswrapper[4698]: I1006 12:01:28.307252 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qmjmg" podStartSLOduration=11.164718553 podStartE2EDuration="20.307235462s" podCreationTimestamp="2025-10-06 12:01:08 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.131297658 +0000 UTC m=+964.543989831" lastFinishedPulling="2025-10-06 12:01:26.273814557 +0000 UTC m=+973.686506740" observedRunningTime="2025-10-06 12:01:28.290764824 +0000 UTC m=+975.703456997" watchObservedRunningTime="2025-10-06 12:01:28.307235462 +0000 UTC m=+975.719927635" Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.138165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" event={"ID":"dbe78f3b-9af0-4900-9f02-13a593444d42","Type":"ContainerStarted","Data":"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b"} Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.139512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.144764 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" event={"ID":"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c","Type":"ContainerStarted","Data":"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df"} Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.145784 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.148930 4698 generic.go:334] "Generic (PLEG): container finished" podID="802f85d7-83b9-4361-ae5e-72d826586a43" containerID="b65b5a09d4820db2baf9bf1930860c27570e633429c3b6d8e470d72637e62bae" exitCode=0 Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 
12:01:29.150282 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gx9kq" event={"ID":"802f85d7-83b9-4361-ae5e-72d826586a43","Type":"ContainerDied","Data":"b65b5a09d4820db2baf9bf1930860c27570e633429c3b6d8e470d72637e62bae"} Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.168702 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" podStartSLOduration=-9223372004.686094 podStartE2EDuration="32.168680939s" podCreationTimestamp="2025-10-06 12:00:57 +0000 UTC" firstStartedPulling="2025-10-06 12:00:58.622087463 +0000 UTC m=+946.034779636" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:01:29.163921951 +0000 UTC m=+976.576614134" watchObservedRunningTime="2025-10-06 12:01:29.168680939 +0000 UTC m=+976.581373112" Oct 06 12:01:29 crc kubenswrapper[4698]: I1006 12:01:29.190541 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" podStartSLOduration=3.526932982 podStartE2EDuration="32.190521121s" podCreationTimestamp="2025-10-06 12:00:57 +0000 UTC" firstStartedPulling="2025-10-06 12:00:58.595836604 +0000 UTC m=+946.008528777" lastFinishedPulling="2025-10-06 12:01:27.259424733 +0000 UTC m=+974.672116916" observedRunningTime="2025-10-06 12:01:29.184674105 +0000 UTC m=+976.597366278" watchObservedRunningTime="2025-10-06 12:01:29.190521121 +0000 UTC m=+976.603213294" Oct 06 12:01:30 crc kubenswrapper[4698]: I1006 12:01:30.165032 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gx9kq" event={"ID":"802f85d7-83b9-4361-ae5e-72d826586a43","Type":"ContainerStarted","Data":"6d22b580c6c97798f73ac633b39e36eafcef8e94170d51c9372c3eead5fe82f5"} Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.175710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerStarted","Data":"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b"} Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.180653 4698 generic.go:334] "Generic (PLEG): container finished" podID="fa86326e-abe0-482b-94db-4579c8dfbc66" containerID="13f12a7b6a1d5ccb4b54b0a2509bea86403465e273619145abed174fdaa3a451" exitCode=0 Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.180734 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fa86326e-abe0-482b-94db-4579c8dfbc66","Type":"ContainerDied","Data":"13f12a7b6a1d5ccb4b54b0a2509bea86403465e273619145abed174fdaa3a451"} Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.782118 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4ktjh"] Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.783815 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.822392 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.822713 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4ktjh"] Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932193 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-combined-ca-bundle\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovn-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932317 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbp2h\" (UniqueName: \"kubernetes.io/projected/f7d17b7b-03e7-4379-9c64-57d50be1882c-kube-api-access-fbp2h\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d17b7b-03e7-4379-9c64-57d50be1882c-config\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:31 crc kubenswrapper[4698]: I1006 12:01:31.932478 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovs-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovs-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-combined-ca-bundle\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034417 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovn-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbp2h\" (UniqueName: \"kubernetes.io/projected/f7d17b7b-03e7-4379-9c64-57d50be1882c-kube-api-access-fbp2h\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.034482 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f7d17b7b-03e7-4379-9c64-57d50be1882c-config\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.035259 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d17b7b-03e7-4379-9c64-57d50be1882c-config\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.035548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovn-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.035865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7d17b7b-03e7-4379-9c64-57d50be1882c-ovs-rundir\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.042003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.044273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d17b7b-03e7-4379-9c64-57d50be1882c-combined-ca-bundle\") pod 
\"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.073538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbp2h\" (UniqueName: \"kubernetes.io/projected/f7d17b7b-03e7-4379-9c64-57d50be1882c-kube-api-access-fbp2h\") pod \"ovn-controller-metrics-4ktjh\" (UID: \"f7d17b7b-03e7-4379-9c64-57d50be1882c\") " pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.111779 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4ktjh" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.207583 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3024f021-f705-443b-a7e1-bcb574c25fe7","Type":"ContainerStarted","Data":"c5fe1b1efd8fe48ba3df85855b0543d955a261e9f86d3fe9a065970de86a9380"} Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.222797 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.224589 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"513c9b58-394d-48dd-a0c9-7ea2f4643f25","Type":"ContainerStarted","Data":"d8271113e65a31923c18b59dc27b2812ee08f4ded5eb7b598bbf07fed1050c6d"} Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.241617 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gx9kq" event={"ID":"802f85d7-83b9-4361-ae5e-72d826586a43","Type":"ContainerStarted","Data":"fbc8bf870904bf40f8394697901d839130889740dacddcb54a7ebf200923645f"} Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.241732 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:32 crc 
kubenswrapper[4698]: I1006 12:01:32.242007 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.248413 4698 generic.go:334] "Generic (PLEG): container finished" podID="b6df0e48-e5a1-42b9-a3f9-712a00716e38" containerID="f8ac3fb40eac544df1ac4af98ad97344b9561b4f99ea87cab70a8c5e576fef39" exitCode=0 Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.248515 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b6df0e48-e5a1-42b9-a3f9-712a00716e38","Type":"ContainerDied","Data":"f8ac3fb40eac544df1ac4af98ad97344b9561b4f99ea87cab70a8c5e576fef39"} Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.248624 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.831699768 podStartE2EDuration="26.248610501s" podCreationTimestamp="2025-10-06 12:01:06 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.801382652 +0000 UTC m=+965.214074825" lastFinishedPulling="2025-10-06 12:01:31.218293385 +0000 UTC m=+978.630985558" observedRunningTime="2025-10-06 12:01:32.241939335 +0000 UTC m=+979.654631528" watchObservedRunningTime="2025-10-06 12:01:32.248610501 +0000 UTC m=+979.661302674" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.262153 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fa86326e-abe0-482b-94db-4579c8dfbc66","Type":"ContainerStarted","Data":"b8e6600769a1b76f4ec635bc0f6207e7a3575131be0b3dfe5c8c02aedfa6cc1a"} Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.345555 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.390660 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:01:32 crc 
kubenswrapper[4698]: I1006 12:01:32.391451 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="dnsmasq-dns" containerID="cri-o://37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b" gracePeriod=10 Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.437699 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gx9kq" podStartSLOduration=15.213994482 podStartE2EDuration="24.43767367s" podCreationTimestamp="2025-10-06 12:01:08 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.679901769 +0000 UTC m=+965.092593942" lastFinishedPulling="2025-10-06 12:01:26.903580947 +0000 UTC m=+974.316273130" observedRunningTime="2025-10-06 12:01:32.330411389 +0000 UTC m=+979.743103562" watchObservedRunningTime="2025-10-06 12:01:32.43767367 +0000 UTC m=+979.850365843" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.502432 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.503997 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.506570 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.523509 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.528522 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.544681 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.922970923 podStartE2EDuration="22.544639763s" podCreationTimestamp="2025-10-06 12:01:10 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.565851892 +0000 UTC m=+964.978544065" lastFinishedPulling="2025-10-06 12:01:31.187520732 +0000 UTC m=+978.600212905" observedRunningTime="2025-10-06 12:01:32.387058655 +0000 UTC m=+979.799750928" watchObservedRunningTime="2025-10-06 12:01:32.544639763 +0000 UTC m=+979.957331936" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.556353 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.203025478 podStartE2EDuration="32.556314033s" podCreationTimestamp="2025-10-06 12:01:00 +0000 UTC" firstStartedPulling="2025-10-06 12:01:14.739763658 +0000 UTC m=+962.152455831" lastFinishedPulling="2025-10-06 12:01:26.093052173 +0000 UTC m=+973.505744386" observedRunningTime="2025-10-06 12:01:32.420434892 +0000 UTC m=+979.833127075" watchObservedRunningTime="2025-10-06 12:01:32.556314033 +0000 UTC m=+979.969006206" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.572889 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr66q\" (UniqueName: 
\"kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.573007 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.573089 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.573143 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.582732 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.608334 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.608671 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="dnsmasq-dns" 
containerID="cri-o://6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df" gracePeriod=10 Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.631737 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.640799 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.640970 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.643026 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.650947 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.677085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr66q\" (UniqueName: \"kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.677596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.677746 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc\") pod 
\"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.677835 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.678615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.678850 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.717739 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.723556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr66q\" (UniqueName: \"kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q\") pod \"dnsmasq-dns-7fd796d7df-28x2s\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.818207 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.818247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.818328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.818357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdmb\" (UniqueName: \"kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.818417 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: 
\"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.840173 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4ktjh"] Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.919798 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.920097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.920284 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.921775 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.921815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdmb\" (UniqueName: 
\"kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.922060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.922455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.922606 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.923692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.933148 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.942255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdmb\" (UniqueName: \"kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb\") pod \"dnsmasq-dns-86db49b7ff-6h4ch\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:32 crc kubenswrapper[4698]: I1006 12:01:32.977241 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.083523 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.144324 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.221103 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.232892 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjvz\" (UniqueName: \"kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz\") pod \"dbe78f3b-9af0-4900-9f02-13a593444d42\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.232971 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc\") pod \"dbe78f3b-9af0-4900-9f02-13a593444d42\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.233001 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config\") pod \"dbe78f3b-9af0-4900-9f02-13a593444d42\" (UID: \"dbe78f3b-9af0-4900-9f02-13a593444d42\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.240526 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz" (OuterVolumeSpecName: "kube-api-access-zjjvz") pod "dbe78f3b-9af0-4900-9f02-13a593444d42" (UID: "dbe78f3b-9af0-4900-9f02-13a593444d42"). InnerVolumeSpecName "kube-api-access-zjjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.280336 4698 generic.go:334] "Generic (PLEG): container finished" podID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerID="6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df" exitCode=0 Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.280415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" event={"ID":"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c","Type":"ContainerDied","Data":"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.280447 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" event={"ID":"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c","Type":"ContainerDied","Data":"1e608e1f621b5591f06ecac76f01c27f3839c1f689620f861bb31fd0f78af74a"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.280467 4698 scope.go:117] "RemoveContainer" containerID="6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.280517 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-n57gb" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.292000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config" (OuterVolumeSpecName: "config") pod "dbe78f3b-9af0-4900-9f02-13a593444d42" (UID: "dbe78f3b-9af0-4900-9f02-13a593444d42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.294520 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4ktjh" event={"ID":"f7d17b7b-03e7-4379-9c64-57d50be1882c","Type":"ContainerStarted","Data":"581fb32f5f1bc4dac8e6c27ca61d6e8ca4e24e74d194d0cf4348c9181d81683f"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.294550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4ktjh" event={"ID":"f7d17b7b-03e7-4379-9c64-57d50be1882c","Type":"ContainerStarted","Data":"a395006f7563620cad42c2eb4636f1af3ec14a845f6b1e74848a35f62e9d0952"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.324096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b6df0e48-e5a1-42b9-a3f9-712a00716e38","Type":"ContainerStarted","Data":"80cafb3a4c1885b2665c6e15e9cfd6cb0dc6f13af0ff0224dd81f7a5791326dc"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.333717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config\") pod \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.333817 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc\") pod \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.333967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l7kn\" (UniqueName: \"kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn\") pod \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\" (UID: \"1944204a-3a3c-4036-a62d-3a0d2b4dbe1c\") " Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.334414 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjvz\" (UniqueName: \"kubernetes.io/projected/dbe78f3b-9af0-4900-9f02-13a593444d42-kube-api-access-zjjvz\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.334426 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.338458 4698 generic.go:334] "Generic (PLEG): container finished" podID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerID="37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b" exitCode=0 Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.339231 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.341672 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn" (OuterVolumeSpecName: "kube-api-access-8l7kn") pod "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" (UID: "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c"). InnerVolumeSpecName "kube-api-access-8l7kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.345191 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbe78f3b-9af0-4900-9f02-13a593444d42" (UID: "dbe78f3b-9af0-4900-9f02-13a593444d42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.372915 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4ktjh" podStartSLOduration=2.372881746 podStartE2EDuration="2.372881746s" podCreationTimestamp="2025-10-06 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:01:33.333097309 +0000 UTC m=+980.745789482" watchObservedRunningTime="2025-10-06 12:01:33.372881746 +0000 UTC m=+980.785573919" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.373202 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.954380303 podStartE2EDuration="33.373194494s" podCreationTimestamp="2025-10-06 12:01:00 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.134615081 +0000 UTC m=+964.547307254" lastFinishedPulling="2025-10-06 12:01:26.553429262 +0000 UTC m=+973.966121445" observedRunningTime="2025-10-06 12:01:33.360397546 +0000 UTC m=+980.773089719" watchObservedRunningTime="2025-10-06 12:01:33.373194494 +0000 UTC m=+980.785886667" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.411382 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" (UID: "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425805 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425855 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" event={"ID":"dbe78f3b-9af0-4900-9f02-13a593444d42","Type":"ContainerDied","Data":"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425894 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425938 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425963 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.425974 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8vjmg" event={"ID":"dbe78f3b-9af0-4900-9f02-13a593444d42","Type":"ContainerDied","Data":"344b865657ef33b68829289c4b3b2ad284f1accd69cbea3f5f6485fd35e7492b"} Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.436003 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l7kn\" (UniqueName: \"kubernetes.io/projected/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-kube-api-access-8l7kn\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.436054 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe78f3b-9af0-4900-9f02-13a593444d42-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.436068 4698 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.446337 4698 scope.go:117] "RemoveContainer" containerID="0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.471003 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config" (OuterVolumeSpecName: "config") pod "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" (UID: "1944204a-3a3c-4036-a62d-3a0d2b4dbe1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.503928 4698 scope.go:117] "RemoveContainer" containerID="6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.504651 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df\": container with ID starting with 6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df not found: ID does not exist" containerID="6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.504685 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df"} err="failed to get container status \"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df\": rpc error: code = NotFound desc = could not find container \"6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df\": container with ID starting with 6ff872946135153f3fce2587fb9b16771423fdea015fc5dd1279cbe1ceae69df not 
found: ID does not exist" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.504720 4698 scope.go:117] "RemoveContainer" containerID="0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.504947 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074\": container with ID starting with 0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074 not found: ID does not exist" containerID="0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.504974 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074"} err="failed to get container status \"0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074\": rpc error: code = NotFound desc = could not find container \"0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074\": container with ID starting with 0ece5b18727499951cf3b9d68509382a7dc4b143886e8200934311993479e074 not found: ID does not exist" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.504988 4698 scope.go:117] "RemoveContainer" containerID="37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.522430 4698 scope.go:117] "RemoveContainer" containerID="068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.555157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.563850 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.630538 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.637506 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-n57gb"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.649809 4698 scope.go:117] "RemoveContainer" containerID="37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.651064 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b\": container with ID starting with 37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b not found: ID does not exist" containerID="37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.651177 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b"} err="failed to get container status \"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b\": rpc error: code = NotFound desc = could not find container \"37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b\": container with ID starting with 37a16b51cb5e175ee305bfb0edc087cde38f1c53d935ab205c5e91fd01ef077b not found: ID does not exist" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.651260 4698 scope.go:117] "RemoveContainer" containerID="068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.658393 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e\": container with ID starting with 068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e not found: ID does not exist" containerID="068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.658448 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e"} err="failed to get container status \"068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e\": rpc error: code = NotFound desc = could not find container \"068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e\": container with ID starting with 068b4302c90bf51f905b41ab465b1088048835d8dd567967e2d9b37042ebb67e not found: ID does not exist" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.666461 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.672986 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8vjmg"] Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.960770 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.961214 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="dnsmasq-dns" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.961230 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="dnsmasq-dns" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.961245 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="dnsmasq-dns" Oct 06 12:01:33 crc 
kubenswrapper[4698]: I1006 12:01:33.961253 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="dnsmasq-dns" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.961281 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="init" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.961290 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="init" Oct 06 12:01:33 crc kubenswrapper[4698]: E1006 12:01:33.961303 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="init" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.961309 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="init" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.961484 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" containerName="dnsmasq-dns" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.961506 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" containerName="dnsmasq-dns" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.963612 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.969047 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ztwj4" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.969190 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.969417 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.970782 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 06 12:01:33 crc kubenswrapper[4698]: I1006 12:01:33.985106 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.077906 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-config\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.077961 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.078213 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzdk\" (UniqueName: \"kubernetes.io/projected/711c60fb-212e-45d1-87c3-c15a97c60f90-kube-api-access-bnzdk\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " 
pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.078234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.078268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.078321 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.078363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-scripts\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180125 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180196 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-scripts\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-config\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180268 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzdk\" (UniqueName: \"kubernetes.io/projected/711c60fb-212e-45d1-87c3-c15a97c60f90-kube-api-access-bnzdk\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.180353 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.181260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.181464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-config\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.182132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/711c60fb-212e-45d1-87c3-c15a97c60f90-scripts\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.185249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.186758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.188579 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/711c60fb-212e-45d1-87c3-c15a97c60f90-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.204028 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzdk\" (UniqueName: \"kubernetes.io/projected/711c60fb-212e-45d1-87c3-c15a97c60f90-kube-api-access-bnzdk\") pod \"ovn-northd-0\" (UID: \"711c60fb-212e-45d1-87c3-c15a97c60f90\") " pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.299149 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.388761 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerID="d1b584c909fa54d92cdaee5447bd1cf40c35f6431710db3d35449f20d08f1319" exitCode=0 Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.389282 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" event={"ID":"b2ab81b0-ecac-42e4-a174-068580a0feb1","Type":"ContainerDied","Data":"d1b584c909fa54d92cdaee5447bd1cf40c35f6431710db3d35449f20d08f1319"} Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.389317 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" event={"ID":"b2ab81b0-ecac-42e4-a174-068580a0feb1","Type":"ContainerStarted","Data":"28be1ff6693f2e9a545f1d28b1c9949d869781b72ae0d267f79030648ba38584"} Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.394267 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.422899 4698 generic.go:334] "Generic (PLEG): container finished" podID="d5de6634-2580-44af-b395-ef15843ddf4a" 
containerID="ad861c454211d6680f27d49b23ca4735832fdbff312cf89f9182c8e4ed2c717f" exitCode=0 Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.423295 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" event={"ID":"d5de6634-2580-44af-b395-ef15843ddf4a","Type":"ContainerDied","Data":"ad861c454211d6680f27d49b23ca4735832fdbff312cf89f9182c8e4ed2c717f"} Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.423393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" event={"ID":"d5de6634-2580-44af-b395-ef15843ddf4a","Type":"ContainerStarted","Data":"de35478ebd6f479cf0f3343a0c2c52f727b38d67796320bbf907ff2dd6ab917b"} Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.522503 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.601433 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.606624 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.624310 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.692352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.692737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.692763 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.692801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4672r\" (UniqueName: \"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.692830 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.794309 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.794406 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4672r\" (UniqueName: \"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.794449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.794506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.794603 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.795547 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.796220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.797706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.797781 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.806434 4698 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 06 12:01:34 crc kubenswrapper[4698]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/d5de6634-2580-44af-b395-ef15843ddf4a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 12:01:34 crc kubenswrapper[4698]: > podSandboxID="de35478ebd6f479cf0f3343a0c2c52f727b38d67796320bbf907ff2dd6ab917b" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.806600 4698 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 06 12:01:34 crc kubenswrapper[4698]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh5d7h8hd8h664h564hfbh5d4h5f5h55h5fch66h675hb8h65bh64dhbh5dchc9h66fh5dbhf4h658h64ch55bhbh65h55dh597h68dh579hbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr66q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7fd796d7df-28x2s_openstack(d5de6634-2580-44af-b395-ef15843ddf4a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d5de6634-2580-44af-b395-ef15843ddf4a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 12:01:34 crc kubenswrapper[4698]: > logger="UnhandledError" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.809272 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d5de6634-2580-44af-b395-ef15843ddf4a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" 
pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" podUID="d5de6634-2580-44af-b395-ef15843ddf4a" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.815789 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4672r\" (UniqueName: \"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r\") pod \"dnsmasq-dns-698758b865-fxtdm\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.821257 4698 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 06 12:01:34 crc kubenswrapper[4698]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b2ab81b0-ecac-42e4-a174-068580a0feb1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 12:01:34 crc kubenswrapper[4698]: > podSandboxID="28be1ff6693f2e9a545f1d28b1c9949d869781b72ae0d267f79030648ba38584" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.821440 4698 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 06 12:01:34 crc kubenswrapper[4698]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hdmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-6h4ch_openstack(b2ab81b0-ecac-42e4-a174-068580a0feb1): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b2ab81b0-ecac-42e4-a174-068580a0feb1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 12:01:34 crc kubenswrapper[4698]: > logger="UnhandledError" Oct 06 12:01:34 crc kubenswrapper[4698]: E1006 12:01:34.822828 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b2ab81b0-ecac-42e4-a174-068580a0feb1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.955030 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:34 crc kubenswrapper[4698]: I1006 12:01:34.998914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 12:01:35 crc kubenswrapper[4698]: W1006 12:01:35.013982 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod711c60fb_212e_45d1_87c3_c15a97c60f90.slice/crio-f0b18d6b1bda32f81d0fdfc7556104db20f534423c95999ff1ec5a65581cf999 WatchSource:0}: Error finding container f0b18d6b1bda32f81d0fdfc7556104db20f534423c95999ff1ec5a65581cf999: Status 404 returned error can't find the container with id f0b18d6b1bda32f81d0fdfc7556104db20f534423c95999ff1ec5a65581cf999 Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.343403 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1944204a-3a3c-4036-a62d-3a0d2b4dbe1c" path="/var/lib/kubelet/pods/1944204a-3a3c-4036-a62d-3a0d2b4dbe1c/volumes" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.344618 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe78f3b-9af0-4900-9f02-13a593444d42" path="/var/lib/kubelet/pods/dbe78f3b-9af0-4900-9f02-13a593444d42/volumes" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.434574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"711c60fb-212e-45d1-87c3-c15a97c60f90","Type":"ContainerStarted","Data":"f0b18d6b1bda32f81d0fdfc7556104db20f534423c95999ff1ec5a65581cf999"} Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.456874 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.655321 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.672531 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.675782 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.676029 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.676275 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ff688" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.676995 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.683676 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.813992 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-lock\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.814068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.814144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8tf\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-kube-api-access-2p8tf\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 
12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.814184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-cache\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.814207 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.821283 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.915510 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb\") pod \"d5de6634-2580-44af-b395-ef15843ddf4a\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.915622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc\") pod \"d5de6634-2580-44af-b395-ef15843ddf4a\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.915787 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config\") pod \"d5de6634-2580-44af-b395-ef15843ddf4a\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 
12:01:35.915811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr66q\" (UniqueName: \"kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q\") pod \"d5de6634-2580-44af-b395-ef15843ddf4a\" (UID: \"d5de6634-2580-44af-b395-ef15843ddf4a\") " Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916181 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8tf\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-kube-api-access-2p8tf\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916240 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-cache\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916276 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-lock\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" 
(UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.916751 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: E1006 12:01:35.919484 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:35 crc kubenswrapper[4698]: E1006 12:01:35.919529 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.919551 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-cache\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: E1006 12:01:35.919605 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:01:36.419564942 +0000 UTC m=+983.832257115 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.920410 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/240ac959-0487-47d4-b219-7741b2127f50-lock\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.925442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q" (OuterVolumeSpecName: "kube-api-access-dr66q") pod "d5de6634-2580-44af-b395-ef15843ddf4a" (UID: "d5de6634-2580-44af-b395-ef15843ddf4a"). InnerVolumeSpecName "kube-api-access-dr66q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.942825 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8tf\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-kube-api-access-2p8tf\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.948352 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.965369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb" (OuterVolumeSpecName: 
"ovsdbserver-nb") pod "d5de6634-2580-44af-b395-ef15843ddf4a" (UID: "d5de6634-2580-44af-b395-ef15843ddf4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.965279 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5de6634-2580-44af-b395-ef15843ddf4a" (UID: "d5de6634-2580-44af-b395-ef15843ddf4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:35 crc kubenswrapper[4698]: I1006 12:01:35.965472 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config" (OuterVolumeSpecName: "config") pod "d5de6634-2580-44af-b395-ef15843ddf4a" (UID: "d5de6634-2580-44af-b395-ef15843ddf4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.018317 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.018352 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.018365 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr66q\" (UniqueName: \"kubernetes.io/projected/d5de6634-2580-44af-b395-ef15843ddf4a-kube-api-access-dr66q\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.018378 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d5de6634-2580-44af-b395-ef15843ddf4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.425355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:36 crc kubenswrapper[4698]: E1006 12:01:36.425932 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:36 crc kubenswrapper[4698]: E1006 12:01:36.425956 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:36 crc kubenswrapper[4698]: E1006 12:01:36.426030 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:01:37.425989242 +0000 UTC m=+984.838681415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.460116 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.460849 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-28x2s" event={"ID":"d5de6634-2580-44af-b395-ef15843ddf4a","Type":"ContainerDied","Data":"de35478ebd6f479cf0f3343a0c2c52f727b38d67796320bbf907ff2dd6ab917b"} Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.460914 4698 scope.go:117] "RemoveContainer" containerID="ad861c454211d6680f27d49b23ca4735832fdbff312cf89f9182c8e4ed2c717f" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.467757 4698 generic.go:334] "Generic (PLEG): container finished" podID="7730f6e1-8a03-463f-90d6-41d706536495" containerID="2d4263676dd70ee5d46cd8aec9d16e8fff62f01b070c06715b30d200a2ac167a" exitCode=0 Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.467812 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fxtdm" event={"ID":"7730f6e1-8a03-463f-90d6-41d706536495","Type":"ContainerDied","Data":"2d4263676dd70ee5d46cd8aec9d16e8fff62f01b070c06715b30d200a2ac167a"} Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.467845 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fxtdm" event={"ID":"7730f6e1-8a03-463f-90d6-41d706536495","Type":"ContainerStarted","Data":"e221609c2e2504ada71329df395c93df3abc77e688970937dcb37f2a69644644"} Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.480372 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" event={"ID":"b2ab81b0-ecac-42e4-a174-068580a0feb1","Type":"ContainerStarted","Data":"319ada742d09b16707d9f21e30044976dd0c574d574987c5a825cae9c82a21d0"} Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.480662 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.583371 
4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" podStartSLOduration=4.583345316 podStartE2EDuration="4.583345316s" podCreationTimestamp="2025-10-06 12:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:01:36.567391059 +0000 UTC m=+983.980083232" watchObservedRunningTime="2025-10-06 12:01:36.583345316 +0000 UTC m=+983.996037489" Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.654077 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:36 crc kubenswrapper[4698]: I1006 12:01:36.660965 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-28x2s"] Oct 06 12:01:37 crc kubenswrapper[4698]: I1006 12:01:37.340606 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5de6634-2580-44af-b395-ef15843ddf4a" path="/var/lib/kubelet/pods/d5de6634-2580-44af-b395-ef15843ddf4a/volumes" Oct 06 12:01:37 crc kubenswrapper[4698]: I1006 12:01:37.460615 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:37 crc kubenswrapper[4698]: E1006 12:01:37.460748 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:37 crc kubenswrapper[4698]: E1006 12:01:37.460783 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:37 crc kubenswrapper[4698]: E1006 12:01:37.460854 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:01:39.460828249 +0000 UTC m=+986.873520422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:37 crc kubenswrapper[4698]: I1006 12:01:37.499588 4698 generic.go:334] "Generic (PLEG): container finished" podID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" exitCode=0 Oct 06 12:01:37 crc kubenswrapper[4698]: I1006 12:01:37.499652 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerDied","Data":"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b"} Oct 06 12:01:37 crc kubenswrapper[4698]: I1006 12:01:37.506206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fxtdm" event={"ID":"7730f6e1-8a03-463f-90d6-41d706536495","Type":"ContainerStarted","Data":"f39d036aa9958cfb84c55ffb75b12c489d67f27e2ab7854d72168110d8ddbd5d"} Oct 06 12:01:38 crc kubenswrapper[4698]: I1006 12:01:38.513896 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:38 crc kubenswrapper[4698]: I1006 12:01:38.538545 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-fxtdm" podStartSLOduration=4.53852888 podStartE2EDuration="4.53852888s" podCreationTimestamp="2025-10-06 12:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-06 12:01:38.532688375 +0000 UTC m=+985.945380548" watchObservedRunningTime="2025-10-06 12:01:38.53852888 +0000 UTC m=+985.951221053" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.513435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:39 crc kubenswrapper[4698]: E1006 12:01:39.513873 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:39 crc kubenswrapper[4698]: E1006 12:01:39.513921 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:39 crc kubenswrapper[4698]: E1006 12:01:39.514005 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:01:43.513981534 +0000 UTC m=+990.926673697 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.616183 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lhjtp"] Oct 06 12:01:39 crc kubenswrapper[4698]: E1006 12:01:39.616523 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5de6634-2580-44af-b395-ef15843ddf4a" containerName="init" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.616538 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5de6634-2580-44af-b395-ef15843ddf4a" containerName="init" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.616728 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5de6634-2580-44af-b395-ef15843ddf4a" containerName="init" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.618064 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.623865 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.625139 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.626894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.645784 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lhjtp"] Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.719827 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.719937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.720036 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.720060 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.720285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.720472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.720547 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjr4\" (UniqueName: \"kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 
12:01:39.822410 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjr4\" (UniqueName: \"kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.822620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.823434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.823583 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.823678 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.829963 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.830243 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle\") pod 
\"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.846165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.851217 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjr4\" (UniqueName: \"kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4\") pod \"swift-ring-rebalance-lhjtp\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:39 crc kubenswrapper[4698]: I1006 12:01:39.936598 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:01:40 crc kubenswrapper[4698]: I1006 12:01:40.481286 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lhjtp"] Oct 06 12:01:40 crc kubenswrapper[4698]: I1006 12:01:40.531377 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhjtp" event={"ID":"44a2d222-9a03-4483-a9dd-2708e7b3a5c7","Type":"ContainerStarted","Data":"9d488d860f04224510c415d9ddd35df0471ce301788d8be208df231aa3b276a4"} Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.151232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.152194 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.261793 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.261856 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.437603 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.551594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"711c60fb-212e-45d1-87c3-c15a97c60f90","Type":"ContainerStarted","Data":"1e247cc8d7fb5c49ef930a9bb3f6e15deabe03b08afb20d07c5052d3cdb0d602"} Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.601484 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.915447 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-db-create-f7cgj"] Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.918663 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.921056 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-f7cgj"] Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.981315 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:42 crc kubenswrapper[4698]: I1006 12:01:42.992879 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6dv\" (UniqueName: \"kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv\") pod \"glance-db-create-f7cgj\" (UID: \"452481c1-a46c-47b7-ab29-6a3b3628197d\") " pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.095162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6dv\" (UniqueName: \"kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv\") pod \"glance-db-create-f7cgj\" (UID: \"452481c1-a46c-47b7-ab29-6a3b3628197d\") " pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.116853 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6dv\" (UniqueName: \"kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv\") pod \"glance-db-create-f7cgj\" (UID: \"452481c1-a46c-47b7-ab29-6a3b3628197d\") " pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.260111 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.508725 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.572205 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"711c60fb-212e-45d1-87c3-c15a97c60f90","Type":"ContainerStarted","Data":"a5e01964e23c234e1edadef6064dfe93d7aa11cbb3145e9288a41f04ce48f4bc"} Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.572567 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.576160 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.610701 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:43 crc kubenswrapper[4698]: E1006 12:01:43.611924 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:43 crc kubenswrapper[4698]: E1006 12:01:43.611957 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:43 crc kubenswrapper[4698]: E1006 12:01:43.612032 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:01:51.611994748 +0000 UTC m=+999.024686921 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.618287 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.439194232 podStartE2EDuration="10.618260643s" podCreationTimestamp="2025-10-06 12:01:33 +0000 UTC" firstStartedPulling="2025-10-06 12:01:35.018627575 +0000 UTC m=+982.431319758" lastFinishedPulling="2025-10-06 12:01:38.197693996 +0000 UTC m=+985.610386169" observedRunningTime="2025-10-06 12:01:43.598694327 +0000 UTC m=+991.011386500" watchObservedRunningTime="2025-10-06 12:01:43.618260643 +0000 UTC m=+991.030952816" Oct 06 12:01:43 crc kubenswrapper[4698]: I1006 12:01:43.801338 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-f7cgj"] Oct 06 12:01:43 crc kubenswrapper[4698]: W1006 12:01:43.822718 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod452481c1_a46c_47b7_ab29_6a3b3628197d.slice/crio-db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671 WatchSource:0}: Error finding container db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671: Status 404 returned error can't find the container with id db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671 Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.511263 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-c4w74"] Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.512994 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.525162 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-c4w74"] Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.549940 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps48t\" (UniqueName: \"kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t\") pod \"watcher-db-create-c4w74\" (UID: \"b244c105-6d87-4a82-865f-a9304464b946\") " pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.597538 4698 generic.go:334] "Generic (PLEG): container finished" podID="452481c1-a46c-47b7-ab29-6a3b3628197d" containerID="bc3a819b1ed1775cc93380a97ca0a14e7fdf5dc353a2c047e337fc9b5d122981" exitCode=0 Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.598003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-f7cgj" event={"ID":"452481c1-a46c-47b7-ab29-6a3b3628197d","Type":"ContainerDied","Data":"bc3a819b1ed1775cc93380a97ca0a14e7fdf5dc353a2c047e337fc9b5d122981"} Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.598073 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-f7cgj" event={"ID":"452481c1-a46c-47b7-ab29-6a3b3628197d","Type":"ContainerStarted","Data":"db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671"} Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.651248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps48t\" (UniqueName: \"kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t\") pod \"watcher-db-create-c4w74\" (UID: \"b244c105-6d87-4a82-865f-a9304464b946\") " pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.692950 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ps48t\" (UniqueName: \"kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t\") pod \"watcher-db-create-c4w74\" (UID: \"b244c105-6d87-4a82-865f-a9304464b946\") " pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.852477 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:44 crc kubenswrapper[4698]: I1006 12:01:44.957255 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:01:45 crc kubenswrapper[4698]: I1006 12:01:45.020666 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:45 crc kubenswrapper[4698]: I1006 12:01:45.020914 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="dnsmasq-dns" containerID="cri-o://319ada742d09b16707d9f21e30044976dd0c574d574987c5a825cae9c82a21d0" gracePeriod=10 Oct 06 12:01:45 crc kubenswrapper[4698]: I1006 12:01:45.616890 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerID="319ada742d09b16707d9f21e30044976dd0c574d574987c5a825cae9c82a21d0" exitCode=0 Oct 06 12:01:45 crc kubenswrapper[4698]: I1006 12:01:45.616938 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" event={"ID":"b2ab81b0-ecac-42e4-a174-068580a0feb1","Type":"ContainerDied","Data":"319ada742d09b16707d9f21e30044976dd0c574d574987c5a825cae9c82a21d0"} Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.642253 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.667840 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.668627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-f7cgj" event={"ID":"452481c1-a46c-47b7-ab29-6a3b3628197d","Type":"ContainerDied","Data":"db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671"} Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.668679 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db45e8046c22e2e829ea47b7f6644753bda50979daf365bc763f7fc1190f6671" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.683812 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" event={"ID":"b2ab81b0-ecac-42e4-a174-068580a0feb1","Type":"ContainerDied","Data":"28be1ff6693f2e9a545f1d28b1c9949d869781b72ae0d267f79030648ba38584"} Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.683878 4698 scope.go:117] "RemoveContainer" containerID="319ada742d09b16707d9f21e30044976dd0c574d574987c5a825cae9c82a21d0" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.683960 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.753649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb\") pod \"b2ab81b0-ecac-42e4-a174-068580a0feb1\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.753795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hdmb\" (UniqueName: \"kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb\") pod \"b2ab81b0-ecac-42e4-a174-068580a0feb1\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.753958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6dv\" (UniqueName: \"kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv\") pod \"452481c1-a46c-47b7-ab29-6a3b3628197d\" (UID: \"452481c1-a46c-47b7-ab29-6a3b3628197d\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.754040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb\") pod \"b2ab81b0-ecac-42e4-a174-068580a0feb1\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.754124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config\") pod \"b2ab81b0-ecac-42e4-a174-068580a0feb1\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.754175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc\") pod \"b2ab81b0-ecac-42e4-a174-068580a0feb1\" (UID: \"b2ab81b0-ecac-42e4-a174-068580a0feb1\") " Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.795231 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv" (OuterVolumeSpecName: "kube-api-access-jh6dv") pod "452481c1-a46c-47b7-ab29-6a3b3628197d" (UID: "452481c1-a46c-47b7-ab29-6a3b3628197d"). InnerVolumeSpecName "kube-api-access-jh6dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.804229 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb" (OuterVolumeSpecName: "kube-api-access-8hdmb") pod "b2ab81b0-ecac-42e4-a174-068580a0feb1" (UID: "b2ab81b0-ecac-42e4-a174-068580a0feb1"). InnerVolumeSpecName "kube-api-access-8hdmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.844176 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2ab81b0-ecac-42e4-a174-068580a0feb1" (UID: "b2ab81b0-ecac-42e4-a174-068580a0feb1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.844889 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2ab81b0-ecac-42e4-a174-068580a0feb1" (UID: "b2ab81b0-ecac-42e4-a174-068580a0feb1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.849949 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2ab81b0-ecac-42e4-a174-068580a0feb1" (UID: "b2ab81b0-ecac-42e4-a174-068580a0feb1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.853124 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config" (OuterVolumeSpecName: "config") pod "b2ab81b0-ecac-42e4-a174-068580a0feb1" (UID: "b2ab81b0-ecac-42e4-a174-068580a0feb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857514 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857558 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hdmb\" (UniqueName: \"kubernetes.io/projected/b2ab81b0-ecac-42e4-a174-068580a0feb1-kube-api-access-8hdmb\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857577 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6dv\" (UniqueName: \"kubernetes.io/projected/452481c1-a46c-47b7-ab29-6a3b3628197d-kube-api-access-jh6dv\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857592 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857603 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:48 crc kubenswrapper[4698]: I1006 12:01:48.857612 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2ab81b0-ecac-42e4-a174-068580a0feb1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:49 crc kubenswrapper[4698]: I1006 12:01:49.020853 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:49 crc kubenswrapper[4698]: I1006 12:01:49.026312 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6h4ch"] Oct 06 12:01:49 crc kubenswrapper[4698]: I1006 12:01:49.363581 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" path="/var/lib/kubelet/pods/b2ab81b0-ecac-42e4-a174-068580a0feb1/volumes" Oct 06 12:01:49 crc kubenswrapper[4698]: I1006 12:01:49.693925 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-f7cgj" Oct 06 12:01:50 crc kubenswrapper[4698]: I1006 12:01:50.715460 4698 generic.go:334] "Generic (PLEG): container finished" podID="90c98585-3fd3-42cb-b011-01ecd1227057" containerID="e08d2a623d5b99981a53f1fd0656087540b5036f8e39b6556f7d21bc8e446234" exitCode=0 Oct 06 12:01:50 crc kubenswrapper[4698]: I1006 12:01:50.715568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerDied","Data":"e08d2a623d5b99981a53f1fd0656087540b5036f8e39b6556f7d21bc8e446234"} Oct 06 12:01:51 crc kubenswrapper[4698]: I1006 12:01:51.714688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:01:51 crc kubenswrapper[4698]: E1006 12:01:51.714994 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 12:01:51 crc kubenswrapper[4698]: E1006 12:01:51.715320 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 12:01:51 crc kubenswrapper[4698]: E1006 12:01:51.715467 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift podName:240ac959-0487-47d4-b219-7741b2127f50 nodeName:}" failed. No retries permitted until 2025-10-06 12:02:07.715411147 +0000 UTC m=+1015.128103330 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift") pod "swift-storage-0" (UID: "240ac959-0487-47d4-b219-7741b2127f50") : configmap "swift-ring-files" not found Oct 06 12:01:51 crc kubenswrapper[4698]: I1006 12:01:51.732342 4698 generic.go:334] "Generic (PLEG): container finished" podID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerID="0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505" exitCode=0 Oct 06 12:01:51 crc kubenswrapper[4698]: I1006 12:01:51.732413 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerDied","Data":"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505"} Oct 06 12:01:51 crc kubenswrapper[4698]: I1006 12:01:51.804971 4698 scope.go:117] "RemoveContainer" containerID="d1b584c909fa54d92cdaee5447bd1cf40c35f6431710db3d35449f20d08f1319" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.212193 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rn46p"] Oct 06 12:01:52 crc kubenswrapper[4698]: E1006 12:01:52.213934 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="dnsmasq-dns" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.214086 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="dnsmasq-dns" Oct 06 12:01:52 crc kubenswrapper[4698]: E1006 12:01:52.214179 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="init" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.214244 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="init" Oct 06 12:01:52 crc kubenswrapper[4698]: E1006 12:01:52.214332 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="452481c1-a46c-47b7-ab29-6a3b3628197d" containerName="mariadb-database-create" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.214394 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="452481c1-a46c-47b7-ab29-6a3b3628197d" containerName="mariadb-database-create" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.214832 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="dnsmasq-dns" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.214918 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="452481c1-a46c-47b7-ab29-6a3b3628197d" containerName="mariadb-database-create" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.218098 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.222093 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rn46p"] Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.285986 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-c4w74"] Oct 06 12:01:52 crc kubenswrapper[4698]: W1006 12:01:52.290929 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb244c105_6d87_4a82_865f_a9304464b946.slice/crio-752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399 WatchSource:0}: Error finding container 752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399: Status 404 returned error can't find the container with id 752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399 Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.332922 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrh7\" (UniqueName: 
\"kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7\") pod \"keystone-db-create-rn46p\" (UID: \"f0d18d85-a449-4bc6-9bc2-ba89b71e9125\") " pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.430275 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sb8gh"] Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.431838 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.435231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrh7\" (UniqueName: \"kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7\") pod \"keystone-db-create-rn46p\" (UID: \"f0d18d85-a449-4bc6-9bc2-ba89b71e9125\") " pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.453227 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sb8gh"] Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.481846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrh7\" (UniqueName: \"kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7\") pod \"keystone-db-create-rn46p\" (UID: \"f0d18d85-a449-4bc6-9bc2-ba89b71e9125\") " pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.537469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6cb\" (UniqueName: \"kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb\") pod \"placement-db-create-sb8gh\" (UID: \"789c653a-f797-4245-8754-0de0cd335997\") " pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.559817 4698 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.642957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6cb\" (UniqueName: \"kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb\") pod \"placement-db-create-sb8gh\" (UID: \"789c653a-f797-4245-8754-0de0cd335997\") " pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.664607 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6cb\" (UniqueName: \"kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb\") pod \"placement-db-create-sb8gh\" (UID: \"789c653a-f797-4245-8754-0de0cd335997\") " pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.768131 4698 generic.go:334] "Generic (PLEG): container finished" podID="b244c105-6d87-4a82-865f-a9304464b946" containerID="8cbe03762441a8cb9763254a3c018fd44a860251a8e5060154c1775e5fe995aa" exitCode=0 Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.768192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c4w74" event={"ID":"b244c105-6d87-4a82-865f-a9304464b946","Type":"ContainerDied","Data":"8cbe03762441a8cb9763254a3c018fd44a860251a8e5060154c1775e5fe995aa"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.768704 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c4w74" event={"ID":"b244c105-6d87-4a82-865f-a9304464b946","Type":"ContainerStarted","Data":"752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.773952 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerStarted","Data":"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.774454 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.775512 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhjtp" event={"ID":"44a2d222-9a03-4483-a9dd-2708e7b3a5c7","Type":"ContainerStarted","Data":"e9a65a89f6cd50f3396f6e578a8ec785cebb84dfa3ab61a36e67f534d2b5358c"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.778065 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerStarted","Data":"32b383e4c41c409e81720d1e2b0d2ceeb0a1bc921ec0ba7ec3db659eded7f7ea"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.778444 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.782109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerStarted","Data":"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905"} Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.786613 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.826632 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.826616348 podStartE2EDuration="54.826616348s" podCreationTimestamp="2025-10-06 12:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:01:52.82507513 +0000 UTC m=+1000.237767303" watchObservedRunningTime="2025-10-06 12:01:52.826616348 +0000 UTC m=+1000.239308521" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.897937 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.240600609 podStartE2EDuration="55.897918987s" podCreationTimestamp="2025-10-06 12:00:57 +0000 UTC" firstStartedPulling="2025-10-06 12:01:04.894111013 +0000 UTC m=+952.306803186" lastFinishedPulling="2025-10-06 12:01:16.551429391 +0000 UTC m=+963.964121564" observedRunningTime="2025-10-06 12:01:52.894709147 +0000 UTC m=+1000.307401330" watchObservedRunningTime="2025-10-06 12:01:52.897918987 +0000 UTC m=+1000.310611160" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.930152 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lhjtp" podStartSLOduration=2.642499668 podStartE2EDuration="13.930127756s" podCreationTimestamp="2025-10-06 12:01:39 +0000 UTC" firstStartedPulling="2025-10-06 12:01:40.524371794 +0000 UTC m=+987.937063967" lastFinishedPulling="2025-10-06 12:01:51.811999882 +0000 UTC m=+999.224692055" observedRunningTime="2025-10-06 12:01:52.924772233 +0000 UTC m=+1000.337464406" watchObservedRunningTime="2025-10-06 12:01:52.930127756 +0000 UTC m=+1000.342819929" Oct 06 12:01:52 crc kubenswrapper[4698]: I1006 12:01:52.979742 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-86db49b7ff-6h4ch" podUID="b2ab81b0-ecac-42e4-a174-068580a0feb1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.125070 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rn46p"] Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.451748 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sb8gh"] Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.793447 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sb8gh" event={"ID":"789c653a-f797-4245-8754-0de0cd335997","Type":"ContainerStarted","Data":"4afa5dc5e535d0ef8845be03cb47984ad194ff11540ac6922c1db269c56a8f93"} Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.795389 4698 generic.go:334] "Generic (PLEG): container finished" podID="f0d18d85-a449-4bc6-9bc2-ba89b71e9125" containerID="8d4b2b31260bb8b1019ed2aea6f60b0fecc6b87181a639b24ee1baa6775a6df6" exitCode=0 Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.795724 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rn46p" event={"ID":"f0d18d85-a449-4bc6-9bc2-ba89b71e9125","Type":"ContainerDied","Data":"8d4b2b31260bb8b1019ed2aea6f60b0fecc6b87181a639b24ee1baa6775a6df6"} Oct 06 12:01:53 crc kubenswrapper[4698]: I1006 12:01:53.795812 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rn46p" event={"ID":"f0d18d85-a449-4bc6-9bc2-ba89b71e9125","Type":"ContainerStarted","Data":"bbbbaeec5f390f29e0ea236a8ceff2b3242b02a990bfc81261d3220b3129aa24"} Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.296750 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.389650 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.393411 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps48t\" (UniqueName: \"kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t\") pod \"b244c105-6d87-4a82-865f-a9304464b946\" (UID: \"b244c105-6d87-4a82-865f-a9304464b946\") " Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.400453 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t" (OuterVolumeSpecName: "kube-api-access-ps48t") pod "b244c105-6d87-4a82-865f-a9304464b946" (UID: "b244c105-6d87-4a82-865f-a9304464b946"). InnerVolumeSpecName "kube-api-access-ps48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.496239 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps48t\" (UniqueName: \"kubernetes.io/projected/b244c105-6d87-4a82-865f-a9304464b946-kube-api-access-ps48t\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.807605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c4w74" event={"ID":"b244c105-6d87-4a82-865f-a9304464b946","Type":"ContainerDied","Data":"752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399"} Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.807677 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752eab0f0e00f829f06d416bd39a759147972901b1514af3936f8769cc227399" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.807641 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-c4w74" Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.810752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerStarted","Data":"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5"} Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.812599 4698 generic.go:334] "Generic (PLEG): container finished" podID="789c653a-f797-4245-8754-0de0cd335997" containerID="7f8976f7d80858b1c3ac2ec52cc4d521b9b053a6deccba6a870bc8f415e5d943" exitCode=0 Oct 06 12:01:54 crc kubenswrapper[4698]: I1006 12:01:54.812708 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sb8gh" event={"ID":"789c653a-f797-4245-8754-0de0cd335997","Type":"ContainerDied","Data":"7f8976f7d80858b1c3ac2ec52cc4d521b9b053a6deccba6a870bc8f415e5d943"} Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.189566 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.311196 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrh7\" (UniqueName: \"kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7\") pod \"f0d18d85-a449-4bc6-9bc2-ba89b71e9125\" (UID: \"f0d18d85-a449-4bc6-9bc2-ba89b71e9125\") " Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.327330 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7" (OuterVolumeSpecName: "kube-api-access-gsrh7") pod "f0d18d85-a449-4bc6-9bc2-ba89b71e9125" (UID: "f0d18d85-a449-4bc6-9bc2-ba89b71e9125"). InnerVolumeSpecName "kube-api-access-gsrh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.413718 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrh7\" (UniqueName: \"kubernetes.io/projected/f0d18d85-a449-4bc6-9bc2-ba89b71e9125-kube-api-access-gsrh7\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.828995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rn46p" event={"ID":"f0d18d85-a449-4bc6-9bc2-ba89b71e9125","Type":"ContainerDied","Data":"bbbbaeec5f390f29e0ea236a8ceff2b3242b02a990bfc81261d3220b3129aa24"} Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.829073 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbbaeec5f390f29e0ea236a8ceff2b3242b02a990bfc81261d3220b3129aa24" Oct 06 12:01:55 crc kubenswrapper[4698]: I1006 12:01:55.829146 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rn46p" Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.212362 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.333228 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg6cb\" (UniqueName: \"kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb\") pod \"789c653a-f797-4245-8754-0de0cd335997\" (UID: \"789c653a-f797-4245-8754-0de0cd335997\") " Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.341319 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb" (OuterVolumeSpecName: "kube-api-access-rg6cb") pod "789c653a-f797-4245-8754-0de0cd335997" (UID: "789c653a-f797-4245-8754-0de0cd335997"). 
InnerVolumeSpecName "kube-api-access-rg6cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.436163 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg6cb\" (UniqueName: \"kubernetes.io/projected/789c653a-f797-4245-8754-0de0cd335997-kube-api-access-rg6cb\") on node \"crc\" DevicePath \"\"" Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.840277 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sb8gh" event={"ID":"789c653a-f797-4245-8754-0de0cd335997","Type":"ContainerDied","Data":"4afa5dc5e535d0ef8845be03cb47984ad194ff11540ac6922c1db269c56a8f93"} Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.840358 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afa5dc5e535d0ef8845be03cb47984ad194ff11540ac6922c1db269c56a8f93" Oct 06 12:01:56 crc kubenswrapper[4698]: I1006 12:01:56.840453 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sb8gh" Oct 06 12:01:58 crc kubenswrapper[4698]: I1006 12:01:58.595096 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qmjmg" podUID="7dd3b0e2-4d06-4c91-8539-4db08c7f2d23" containerName="ovn-controller" probeResult="failure" output=< Oct 06 12:01:58 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 06 12:01:58 crc kubenswrapper[4698]: > Oct 06 12:01:58 crc kubenswrapper[4698]: I1006 12:01:58.869219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerStarted","Data":"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f"} Oct 06 12:01:58 crc kubenswrapper[4698]: I1006 12:01:58.907069 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.172114977 podStartE2EDuration="54.907040742s" podCreationTimestamp="2025-10-06 12:01:04 +0000 UTC" firstStartedPulling="2025-10-06 12:01:17.768973298 +0000 UTC m=+965.181665471" lastFinishedPulling="2025-10-06 12:01:58.503899063 +0000 UTC m=+1005.916591236" observedRunningTime="2025-10-06 12:01:58.897924295 +0000 UTC m=+1006.310616508" watchObservedRunningTime="2025-10-06 12:01:58.907040742 +0000 UTC m=+1006.319732955" Oct 06 12:02:00 crc kubenswrapper[4698]: I1006 12:02:00.795145 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:00 crc kubenswrapper[4698]: I1006 12:02:00.896140 4698 generic.go:334] "Generic (PLEG): container finished" podID="44a2d222-9a03-4483-a9dd-2708e7b3a5c7" containerID="e9a65a89f6cd50f3396f6e578a8ec785cebb84dfa3ab61a36e67f534d2b5358c" exitCode=0 Oct 06 12:02:00 crc kubenswrapper[4698]: I1006 12:02:00.896192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-lhjtp" event={"ID":"44a2d222-9a03-4483-a9dd-2708e7b3a5c7","Type":"ContainerDied","Data":"e9a65a89f6cd50f3396f6e578a8ec785cebb84dfa3ab61a36e67f534d2b5358c"} Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.286061 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f541-account-create-hq766"] Oct 06 12:02:02 crc kubenswrapper[4698]: E1006 12:02:02.286946 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789c653a-f797-4245-8754-0de0cd335997" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.286963 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="789c653a-f797-4245-8754-0de0cd335997" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: E1006 12:02:02.287041 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b244c105-6d87-4a82-865f-a9304464b946" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.287050 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b244c105-6d87-4a82-865f-a9304464b946" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: E1006 12:02:02.287064 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d18d85-a449-4bc6-9bc2-ba89b71e9125" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.287072 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d18d85-a449-4bc6-9bc2-ba89b71e9125" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.287282 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="789c653a-f797-4245-8754-0de0cd335997" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.287312 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b244c105-6d87-4a82-865f-a9304464b946" 
containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.287333 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d18d85-a449-4bc6-9bc2-ba89b71e9125" containerName="mariadb-database-create" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.288159 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.290905 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.318696 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f541-account-create-hq766"] Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.341634 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.387924 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.388204 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvjr4\" (UniqueName: \"kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.388307 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: 
\"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.388336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.389874 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.389970 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.390162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.390259 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices\") pod \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\" (UID: \"44a2d222-9a03-4483-a9dd-2708e7b3a5c7\") " Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.391301 4698 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.391763 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ncjm\" (UniqueName: \"kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm\") pod \"keystone-f541-account-create-hq766\" (UID: \"272010f5-1745-47b9-bf97-af5335394b6f\") " pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.392356 4698 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.392401 4698 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.395289 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4" (OuterVolumeSpecName: "kube-api-access-jvjr4") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "kube-api-access-jvjr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.400514 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.417215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.417851 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts" (OuterVolumeSpecName: "scripts") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.423974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44a2d222-9a03-4483-a9dd-2708e7b3a5c7" (UID: "44a2d222-9a03-4483-a9dd-2708e7b3a5c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.458534 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bd25-account-create-hrjlh"] Oct 06 12:02:02 crc kubenswrapper[4698]: E1006 12:02:02.458902 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a2d222-9a03-4483-a9dd-2708e7b3a5c7" containerName="swift-ring-rebalance" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.458919 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a2d222-9a03-4483-a9dd-2708e7b3a5c7" containerName="swift-ring-rebalance" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.459134 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a2d222-9a03-4483-a9dd-2708e7b3a5c7" containerName="swift-ring-rebalance" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.459721 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.462074 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.469140 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bd25-account-create-hrjlh"] Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.517895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ncjm\" (UniqueName: \"kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm\") pod \"keystone-f541-account-create-hq766\" (UID: \"272010f5-1745-47b9-bf97-af5335394b6f\") " pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518066 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd7q\" (UniqueName: 
\"kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q\") pod \"placement-bd25-account-create-hrjlh\" (UID: \"527d6d0f-30f7-4a96-866f-8392b12057b3\") " pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518168 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518184 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvjr4\" (UniqueName: \"kubernetes.io/projected/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-kube-api-access-jvjr4\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518198 4698 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518211 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.518221 4698 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/44a2d222-9a03-4483-a9dd-2708e7b3a5c7-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.547572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ncjm\" (UniqueName: \"kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm\") pod \"keystone-f541-account-create-hq766\" (UID: \"272010f5-1745-47b9-bf97-af5335394b6f\") " pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:02 crc 
kubenswrapper[4698]: I1006 12:02:02.619953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd7q\" (UniqueName: \"kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q\") pod \"placement-bd25-account-create-hrjlh\" (UID: \"527d6d0f-30f7-4a96-866f-8392b12057b3\") " pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.639609 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd7q\" (UniqueName: \"kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q\") pod \"placement-bd25-account-create-hrjlh\" (UID: \"527d6d0f-30f7-4a96-866f-8392b12057b3\") " pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.659173 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.834276 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.926108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhjtp" event={"ID":"44a2d222-9a03-4483-a9dd-2708e7b3a5c7","Type":"ContainerDied","Data":"9d488d860f04224510c415d9ddd35df0471ce301788d8be208df231aa3b276a4"} Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.926147 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d488d860f04224510c415d9ddd35df0471ce301788d8be208df231aa3b276a4" Oct 06 12:02:02 crc kubenswrapper[4698]: I1006 12:02:02.926384 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lhjtp" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:02.998991 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f5be-account-create-w2bq6"] Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.000848 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.002733 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.006278 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f5be-account-create-w2bq6"] Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.115911 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f541-account-create-hq766"] Oct 06 12:02:03 crc kubenswrapper[4698]: W1006 12:02:03.119508 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272010f5_1745_47b9_bf97_af5335394b6f.slice/crio-450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae WatchSource:0}: Error finding container 450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae: Status 404 returned error can't find the container with id 450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.132334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdd4\" (UniqueName: \"kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4\") pod \"glance-f5be-account-create-w2bq6\" (UID: \"47bbda08-fa2d-4bad-af25-163aabf96973\") " pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.234210 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7pdd4\" (UniqueName: \"kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4\") pod \"glance-f5be-account-create-w2bq6\" (UID: \"47bbda08-fa2d-4bad-af25-163aabf96973\") " pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.267875 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdd4\" (UniqueName: \"kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4\") pod \"glance-f5be-account-create-w2bq6\" (UID: \"47bbda08-fa2d-4bad-af25-163aabf96973\") " pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.335789 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.346935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bd25-account-create-hrjlh"] Oct 06 12:02:03 crc kubenswrapper[4698]: W1006 12:02:03.364747 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod527d6d0f_30f7_4a96_866f_8392b12057b3.slice/crio-ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca WatchSource:0}: Error finding container ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca: Status 404 returned error can't find the container with id ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.615128 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qmjmg" podUID="7dd3b0e2-4d06-4c91-8539-4db08c7f2d23" containerName="ovn-controller" probeResult="failure" output=< Oct 06 12:02:03 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 
'connected' status Oct 06 12:02:03 crc kubenswrapper[4698]: > Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.646710 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.664652 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gx9kq" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.846214 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f5be-account-create-w2bq6"] Oct 06 12:02:03 crc kubenswrapper[4698]: W1006 12:02:03.856582 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47bbda08_fa2d_4bad_af25_163aabf96973.slice/crio-576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289 WatchSource:0}: Error finding container 576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289: Status 404 returned error can't find the container with id 576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289 Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.901457 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qmjmg-config-t5r9t"] Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.902718 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.905828 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.915004 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qmjmg-config-t5r9t"] Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.960141 4698 generic.go:334] "Generic (PLEG): container finished" podID="527d6d0f-30f7-4a96-866f-8392b12057b3" containerID="8854bb3a49990269550c6604d1b7d73b8705440bec9e2cd770a8ea77c3f96b70" exitCode=0 Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.960390 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bd25-account-create-hrjlh" event={"ID":"527d6d0f-30f7-4a96-866f-8392b12057b3","Type":"ContainerDied","Data":"8854bb3a49990269550c6604d1b7d73b8705440bec9e2cd770a8ea77c3f96b70"} Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.960453 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bd25-account-create-hrjlh" event={"ID":"527d6d0f-30f7-4a96-866f-8392b12057b3","Type":"ContainerStarted","Data":"ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca"} Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.968157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f5be-account-create-w2bq6" event={"ID":"47bbda08-fa2d-4bad-af25-163aabf96973","Type":"ContainerStarted","Data":"576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289"} Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.970714 4698 generic.go:334] "Generic (PLEG): container finished" podID="272010f5-1745-47b9-bf97-af5335394b6f" containerID="c0e39504d16a559d57fa54eded68236b728159b49e35d6cd36577ea8d4d7f134" exitCode=0 Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.970963 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-f541-account-create-hq766" event={"ID":"272010f5-1745-47b9-bf97-af5335394b6f","Type":"ContainerDied","Data":"c0e39504d16a559d57fa54eded68236b728159b49e35d6cd36577ea8d4d7f134"} Oct 06 12:02:03 crc kubenswrapper[4698]: I1006 12:02:03.971055 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f541-account-create-hq766" event={"ID":"272010f5-1745-47b9-bf97-af5335394b6f","Type":"ContainerStarted","Data":"450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae"} Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.053880 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.054270 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c67q7\" (UniqueName: \"kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.054597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.054730 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.054917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.055098 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156693 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156775 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c67q7\" (UniqueName: \"kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156884 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156973 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.156995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.157164 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.157288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.157468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.158108 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.160461 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.177179 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c67q7\" (UniqueName: \"kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7\") pod \"ovn-controller-qmjmg-config-t5r9t\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.245277 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.521773 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-2c05-account-create-drzlg"] Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.524269 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.527385 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.532632 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-2c05-account-create-drzlg"] Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.669428 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rslw\" (UniqueName: \"kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw\") pod \"watcher-2c05-account-create-drzlg\" (UID: \"45a95ffd-2442-4483-990b-5d80a2f84ec2\") " pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.771441 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rslw\" (UniqueName: \"kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw\") pod \"watcher-2c05-account-create-drzlg\" (UID: \"45a95ffd-2442-4483-990b-5d80a2f84ec2\") " pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.800226 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rslw\" (UniqueName: \"kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw\") pod \"watcher-2c05-account-create-drzlg\" (UID: \"45a95ffd-2442-4483-990b-5d80a2f84ec2\") " 
pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.803248 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qmjmg-config-t5r9t"] Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.844747 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.996967 4698 generic.go:334] "Generic (PLEG): container finished" podID="47bbda08-fa2d-4bad-af25-163aabf96973" containerID="c309372a9dc3bd50d5a28e6f00a67596b79e790d3224eed8219074c412f8a55e" exitCode=0 Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.997113 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f5be-account-create-w2bq6" event={"ID":"47bbda08-fa2d-4bad-af25-163aabf96973","Type":"ContainerDied","Data":"c309372a9dc3bd50d5a28e6f00a67596b79e790d3224eed8219074c412f8a55e"} Oct 06 12:02:04 crc kubenswrapper[4698]: I1006 12:02:04.999432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qmjmg-config-t5r9t" event={"ID":"af2d8677-f2f5-4f19-89ca-957364bcff35","Type":"ContainerStarted","Data":"9521184f9b22e88a8092a1c09f692dd10713e3f9b1b8b3489757c454b7129ac0"} Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.416920 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-2c05-account-create-drzlg"] Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.812524 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.818881 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.820872 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.823472 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.899081 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnd7q\" (UniqueName: \"kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q\") pod \"527d6d0f-30f7-4a96-866f-8392b12057b3\" (UID: \"527d6d0f-30f7-4a96-866f-8392b12057b3\") " Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.899146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ncjm\" (UniqueName: \"kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm\") pod \"272010f5-1745-47b9-bf97-af5335394b6f\" (UID: \"272010f5-1745-47b9-bf97-af5335394b6f\") " Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.911313 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm" (OuterVolumeSpecName: "kube-api-access-7ncjm") pod "272010f5-1745-47b9-bf97-af5335394b6f" (UID: "272010f5-1745-47b9-bf97-af5335394b6f"). InnerVolumeSpecName "kube-api-access-7ncjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:05 crc kubenswrapper[4698]: I1006 12:02:05.911416 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q" (OuterVolumeSpecName: "kube-api-access-vnd7q") pod "527d6d0f-30f7-4a96-866f-8392b12057b3" (UID: "527d6d0f-30f7-4a96-866f-8392b12057b3"). InnerVolumeSpecName "kube-api-access-vnd7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.001433 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnd7q\" (UniqueName: \"kubernetes.io/projected/527d6d0f-30f7-4a96-866f-8392b12057b3-kube-api-access-vnd7q\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.001476 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ncjm\" (UniqueName: \"kubernetes.io/projected/272010f5-1745-47b9-bf97-af5335394b6f-kube-api-access-7ncjm\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.012122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2c05-account-create-drzlg" event={"ID":"45a95ffd-2442-4483-990b-5d80a2f84ec2","Type":"ContainerStarted","Data":"6e85a826cde780433967a0735e27d3decc5dae7fa19da9347ce667974d9c3fcd"} Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.012172 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2c05-account-create-drzlg" event={"ID":"45a95ffd-2442-4483-990b-5d80a2f84ec2","Type":"ContainerStarted","Data":"875ccc490277b7200353ed08d90b0f1a9222c98c8dcad3e74cf80271dcfed01c"} Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.014251 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f541-account-create-hq766" 
event={"ID":"272010f5-1745-47b9-bf97-af5335394b6f","Type":"ContainerDied","Data":"450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae"} Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.014362 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="450886e4fa3ced4d816c82eeef4f1f314a25cae246447c492f8fbcf83e116eae" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.014274 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f541-account-create-hq766" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.016320 4698 generic.go:334] "Generic (PLEG): container finished" podID="af2d8677-f2f5-4f19-89ca-957364bcff35" containerID="0e016f31b0410320c63583466a7c4587439c6a195568ca82d47576d0b0949f32" exitCode=0 Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.016392 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qmjmg-config-t5r9t" event={"ID":"af2d8677-f2f5-4f19-89ca-957364bcff35","Type":"ContainerDied","Data":"0e016f31b0410320c63583466a7c4587439c6a195568ca82d47576d0b0949f32"} Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.020050 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bd25-account-create-hrjlh" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.025237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bd25-account-create-hrjlh" event={"ID":"527d6d0f-30f7-4a96-866f-8392b12057b3","Type":"ContainerDied","Data":"ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca"} Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.025291 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce417d93fca4fef729ffa7d935f9e348b7cf8ab091b59afe61e13a08d2ecfeca" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.026648 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.035045 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-2c05-account-create-drzlg" podStartSLOduration=2.035007156 podStartE2EDuration="2.035007156s" podCreationTimestamp="2025-10-06 12:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:06.0258638 +0000 UTC m=+1013.438556003" watchObservedRunningTime="2025-10-06 12:02:06.035007156 +0000 UTC m=+1013.447699329" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.420835 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.522537 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdd4\" (UniqueName: \"kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4\") pod \"47bbda08-fa2d-4bad-af25-163aabf96973\" (UID: \"47bbda08-fa2d-4bad-af25-163aabf96973\") " Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.540426 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4" (OuterVolumeSpecName: "kube-api-access-7pdd4") pod "47bbda08-fa2d-4bad-af25-163aabf96973" (UID: "47bbda08-fa2d-4bad-af25-163aabf96973"). InnerVolumeSpecName "kube-api-access-7pdd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:06 crc kubenswrapper[4698]: I1006 12:02:06.624920 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdd4\" (UniqueName: \"kubernetes.io/projected/47bbda08-fa2d-4bad-af25-163aabf96973-kube-api-access-7pdd4\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.028979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f5be-account-create-w2bq6" event={"ID":"47bbda08-fa2d-4bad-af25-163aabf96973","Type":"ContainerDied","Data":"576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289"} Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.029043 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f5be-account-create-w2bq6" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.029050 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576f85d301657a6cc986b52c56c7443a5b5dc7b0ec122680ecf1bdad9c7bb289" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.032541 4698 generic.go:334] "Generic (PLEG): container finished" podID="45a95ffd-2442-4483-990b-5d80a2f84ec2" containerID="6e85a826cde780433967a0735e27d3decc5dae7fa19da9347ce667974d9c3fcd" exitCode=0 Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.032598 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2c05-account-create-drzlg" event={"ID":"45a95ffd-2442-4483-990b-5d80a2f84ec2","Type":"ContainerDied","Data":"6e85a826cde780433967a0735e27d3decc5dae7fa19da9347ce667974d9c3fcd"} Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.430796 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.544873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c67q7\" (UniqueName: \"kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545180 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts\") pod \"af2d8677-f2f5-4f19-89ca-957364bcff35\" (UID: \"af2d8677-f2f5-4f19-89ca-957364bcff35\") " Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545274 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run" (OuterVolumeSpecName: "var-run") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545609 4698 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545622 4698 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.545635 4698 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af2d8677-f2f5-4f19-89ca-957364bcff35-var-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.546205 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.546371 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts" (OuterVolumeSpecName: "scripts") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.552263 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7" (OuterVolumeSpecName: "kube-api-access-c67q7") pod "af2d8677-f2f5-4f19-89ca-957364bcff35" (UID: "af2d8677-f2f5-4f19-89ca-957364bcff35"). InnerVolumeSpecName "kube-api-access-c67q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.647601 4698 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.647652 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c67q7\" (UniqueName: \"kubernetes.io/projected/af2d8677-f2f5-4f19-89ca-957364bcff35-kube-api-access-c67q7\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.647666 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af2d8677-f2f5-4f19-89ca-957364bcff35-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.749249 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") 
pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.755575 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/240ac959-0487-47d4-b219-7741b2127f50-etc-swift\") pod \"swift-storage-0\" (UID: \"240ac959-0487-47d4-b219-7741b2127f50\") " pod="openstack/swift-storage-0" Oct 06 12:02:07 crc kubenswrapper[4698]: I1006 12:02:07.845548 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.058230 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qmjmg-config-t5r9t" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.060200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qmjmg-config-t5r9t" event={"ID":"af2d8677-f2f5-4f19-89ca-957364bcff35","Type":"ContainerDied","Data":"9521184f9b22e88a8092a1c09f692dd10713e3f9b1b8b3489757c454b7129ac0"} Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.060257 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9521184f9b22e88a8092a1c09f692dd10713e3f9b1b8b3489757c454b7129ac0" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.307055 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r2jp7"] Oct 06 12:02:08 crc kubenswrapper[4698]: E1006 12:02:08.307789 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2d8677-f2f5-4f19-89ca-957364bcff35" containerName="ovn-config" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.307802 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2d8677-f2f5-4f19-89ca-957364bcff35" containerName="ovn-config" Oct 06 12:02:08 crc kubenswrapper[4698]: E1006 12:02:08.307818 4698 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="272010f5-1745-47b9-bf97-af5335394b6f" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.307825 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="272010f5-1745-47b9-bf97-af5335394b6f" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: E1006 12:02:08.307848 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d6d0f-30f7-4a96-866f-8392b12057b3" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.307853 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d6d0f-30f7-4a96-866f-8392b12057b3" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: E1006 12:02:08.307872 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47bbda08-fa2d-4bad-af25-163aabf96973" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.307878 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bbda08-fa2d-4bad-af25-163aabf96973" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.308142 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2d8677-f2f5-4f19-89ca-957364bcff35" containerName="ovn-config" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.308157 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bbda08-fa2d-4bad-af25-163aabf96973" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.308175 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="527d6d0f-30f7-4a96-866f-8392b12057b3" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.308186 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="272010f5-1745-47b9-bf97-af5335394b6f" containerName="mariadb-account-create" Oct 06 12:02:08 crc kubenswrapper[4698]: 
I1006 12:02:08.308762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.314381 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.314591 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-47rrc" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.341064 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r2jp7"] Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.360969 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.361053 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.361092 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv587\" (UniqueName: \"kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.361114 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.388446 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.462958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.463055 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.463098 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv587\" (UniqueName: \"kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.463122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.471940 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.471974 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.472498 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.490097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv587\" (UniqueName: \"kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587\") pod \"glance-db-sync-r2jp7\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.596766 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qmjmg-config-t5r9t"] Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.603933 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qmjmg-config-t5r9t"] Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.609472 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.634034 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.639952 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qmjmg" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.685648 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rslw\" (UniqueName: \"kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw\") pod \"45a95ffd-2442-4483-990b-5d80a2f84ec2\" (UID: \"45a95ffd-2442-4483-990b-5d80a2f84ec2\") " Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.694178 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw" (OuterVolumeSpecName: "kube-api-access-9rslw") pod "45a95ffd-2442-4483-990b-5d80a2f84ec2" (UID: "45a95ffd-2442-4483-990b-5d80a2f84ec2"). InnerVolumeSpecName "kube-api-access-9rslw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:08 crc kubenswrapper[4698]: I1006 12:02:08.788178 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rslw\" (UniqueName: \"kubernetes.io/projected/45a95ffd-2442-4483-990b-5d80a2f84ec2-kube-api-access-9rslw\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.051518 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.076646 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"7d4808cd0998194f49d154aaf45b991f6fb58cd8f1ae728f4026d45d9b7400c0"} Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.098304 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2c05-account-create-drzlg" event={"ID":"45a95ffd-2442-4483-990b-5d80a2f84ec2","Type":"ContainerDied","Data":"875ccc490277b7200353ed08d90b0f1a9222c98c8dcad3e74cf80271dcfed01c"} Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.098344 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="875ccc490277b7200353ed08d90b0f1a9222c98c8dcad3e74cf80271dcfed01c" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.098403 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-2c05-account-create-drzlg" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.283851 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r2jp7"] Oct 06 12:02:09 crc kubenswrapper[4698]: W1006 12:02:09.294349 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c6365d_fa8b_4f4e_9683_e021e05882ff.slice/crio-6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8 WatchSource:0}: Error finding container 6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8: Status 404 returned error can't find the container with id 6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8 Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.342307 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2d8677-f2f5-4f19-89ca-957364bcff35" path="/var/lib/kubelet/pods/af2d8677-f2f5-4f19-89ca-957364bcff35/volumes" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.432243 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.977908 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.985081 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="prometheus" containerID="cri-o://63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" gracePeriod=600 Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.985246 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="thanos-sidecar" 
containerID="cri-o://a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" gracePeriod=600 Oct 06 12:02:09 crc kubenswrapper[4698]: I1006 12:02:09.985300 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="config-reloader" containerID="cri-o://88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" gracePeriod=600 Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.113966 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r2jp7" event={"ID":"95c6365d-fa8b-4f4e-9683-e021e05882ff","Type":"ContainerStarted","Data":"6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8"} Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.849121 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.962627 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.962696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.962819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: 
\"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.962948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.963006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.963059 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.963084 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.963118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjtd\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd\") pod \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\" (UID: \"aca88314-f6aa-4d15-8c81-2a4c66d4297f\") " Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.966619 4698 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.973455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd" (OuterVolumeSpecName: "kube-api-access-kmjtd") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "kube-api-access-kmjtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.973656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out" (OuterVolumeSpecName: "config-out") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.975366 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config" (OuterVolumeSpecName: "config") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.975534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.978414 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.997199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config" (OuterVolumeSpecName: "web-config") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:10 crc kubenswrapper[4698]: I1006 12:02:10.998534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "aca88314-f6aa-4d15-8c81-2a4c66d4297f" (UID: "aca88314-f6aa-4d15-8c81-2a4c66d4297f"). InnerVolumeSpecName "pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065099 4698 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-web-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065581 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065595 4698 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aca88314-f6aa-4d15-8c81-2a4c66d4297f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065653 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") on node \"crc\" " Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065669 4698 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aca88314-f6aa-4d15-8c81-2a4c66d4297f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065680 4698 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-tls-assets\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065690 4698 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aca88314-f6aa-4d15-8c81-2a4c66d4297f-config-out\") on node \"crc\" 
DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.065699 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjtd\" (UniqueName: \"kubernetes.io/projected/aca88314-f6aa-4d15-8c81-2a4c66d4297f-kube-api-access-kmjtd\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.098906 4698 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.099254 4698 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0") on node "crc" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.126418 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"beb40d4d28d15cf710278557b25500c56a3b3d2a7e0875e6553d4da29d3cb3e5"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.126474 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"1eb618985e9aa5f6e97e3306079ee43f1c8060ef8749fe050115e236e8717209"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131258 4698 generic.go:334] "Generic (PLEG): container finished" podID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" exitCode=0 Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131305 4698 generic.go:334] "Generic (PLEG): container finished" podID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" exitCode=0 Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131316 4698 generic.go:334] 
"Generic (PLEG): container finished" podID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" exitCode=0 Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131347 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerDied","Data":"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerDied","Data":"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerDied","Data":"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131427 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aca88314-f6aa-4d15-8c81-2a4c66d4297f","Type":"ContainerDied","Data":"52e72b787168ab03161ebe0da87bb1efe2a1a61507ffccb51a57527eec67d928"} Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131449 4698 scope.go:117] "RemoveContainer" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.131613 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.167450 4698 reconciler_common.go:293] "Volume detached for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.179841 4698 scope.go:117] "RemoveContainer" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.181406 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.188586 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.229540 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.230043 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a95ffd-2442-4483-990b-5d80a2f84ec2" containerName="mariadb-account-create" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230058 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a95ffd-2442-4483-990b-5d80a2f84ec2" containerName="mariadb-account-create" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.230086 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="config-reloader" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230092 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="config-reloader" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.230107 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="init-config-reloader" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230114 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="init-config-reloader" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.230128 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="thanos-sidecar" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230134 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="thanos-sidecar" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.230146 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="prometheus" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230151 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="prometheus" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230338 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="thanos-sidecar" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230354 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="prometheus" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230363 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a95ffd-2442-4483-990b-5d80a2f84ec2" containerName="mariadb-account-create" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.230376 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="config-reloader" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.232248 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.238549 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.245632 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.247402 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.249397 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-plfqm" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.249719 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.257417 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.260180 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.280973 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.312048 4698 scope.go:117] "RemoveContainer" containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.366377 4698 scope.go:117] "RemoveContainer" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.400593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" path="/var/lib/kubelet/pods/aca88314-f6aa-4d15-8c81-2a4c66d4297f/volumes" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8778x\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409321 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409355 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409376 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409439 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409458 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409494 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " 
pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409513 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.409541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.467437 4698 scope.go:117] "RemoveContainer" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.471620 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": container with ID starting with a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f not found: ID does not exist" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.471670 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f"} err="failed to get container status \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": rpc error: code = NotFound desc = could not find container 
\"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": container with ID starting with a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.471698 4698 scope.go:117] "RemoveContainer" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.473551 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": container with ID starting with 88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5 not found: ID does not exist" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.473583 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5"} err="failed to get container status \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": rpc error: code = NotFound desc = could not find container \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": container with ID starting with 88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5 not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.473604 4698 scope.go:117] "RemoveContainer" containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.475283 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": container with ID starting with 63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905 not found: ID does not exist" 
containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.475307 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905"} err="failed to get container status \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": rpc error: code = NotFound desc = could not find container \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": container with ID starting with 63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905 not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.475323 4698 scope.go:117] "RemoveContainer" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" Oct 06 12:02:11 crc kubenswrapper[4698]: E1006 12:02:11.475744 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": container with ID starting with 83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b not found: ID does not exist" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.475797 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b"} err="failed to get container status \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": rpc error: code = NotFound desc = could not find container \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": container with ID starting with 83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.475840 4698 scope.go:117] 
"RemoveContainer" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.482628 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f"} err="failed to get container status \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": rpc error: code = NotFound desc = could not find container \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": container with ID starting with a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.482678 4698 scope.go:117] "RemoveContainer" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.500632 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5"} err="failed to get container status \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": rpc error: code = NotFound desc = could not find container \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": container with ID starting with 88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5 not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.500720 4698 scope.go:117] "RemoveContainer" containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.504250 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905"} err="failed to get container status \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": rpc error: code = 
NotFound desc = could not find container \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": container with ID starting with 63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905 not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.504317 4698 scope.go:117] "RemoveContainer" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.507570 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b"} err="failed to get container status \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": rpc error: code = NotFound desc = could not find container \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": container with ID starting with 83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.507605 4698 scope.go:117] "RemoveContainer" containerID="a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.511536 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f"} err="failed to get container status \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": rpc error: code = NotFound desc = could not find container \"a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f\": container with ID starting with a1c6bc6aadd86a82dc05142fe9a8c40d23512a0c066df51164aef0e20745a02f not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.511583 4698 scope.go:117] "RemoveContainer" containerID="88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5" Oct 06 12:02:11 crc 
kubenswrapper[4698]: I1006 12:02:11.512720 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5"} err="failed to get container status \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": rpc error: code = NotFound desc = could not find container \"88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5\": container with ID starting with 88c1ecb2ef18a674da1d62a3796d00232b6d0fdeed16127b2ba0251679c4fcb5 not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.512781 4698 scope.go:117] "RemoveContainer" containerID="63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.512860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8778x\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.512901 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.512943 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc 
kubenswrapper[4698]: I1006 12:02:11.512966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513002 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.513191 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.514042 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905"} err="failed to get container status \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": rpc error: code = NotFound desc = could not find container \"63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905\": container with ID starting with 63ff43d1c7485a531d5ccb7eb5b9c0a92d26023b96bd9688c4ad5311f03d5905 not found: ID does not exist" Oct 06 12:02:11 crc 
kubenswrapper[4698]: I1006 12:02:11.514067 4698 scope.go:117] "RemoveContainer" containerID="83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.515193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.520987 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b"} err="failed to get container status \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": rpc error: code = NotFound desc = could not find container \"83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b\": container with ID starting with 83a3ee8055cb120ccff4789481e7de832edfd1a6083ac0fd2a948b438f64a38b not found: ID does not exist" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.528887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.547851 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.548092 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.548564 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.548685 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.548886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.551914 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " 
pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.556279 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.556326 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44d562176676e8d8573ee8cf6c79a771697a87ae1dbf01fea0ea1f08f9081a45/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.561230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.601024 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8778x\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x\") pod \"prometheus-metric-storage-0\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.687426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: 
\"40388b3e-433c-484a-b0aa-c7e427601657\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.780115 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-4xb2s"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.781621 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.784769 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.785047 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-cw6dm" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.814448 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4xb2s"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.890408 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.890759 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mx92t"] Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.891897 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.927155 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.927233 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.927280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9lb\" (UniqueName: \"kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.927335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:11 crc kubenswrapper[4698]: I1006 12:02:11.949941 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mx92t"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.029059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.029132 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.029190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9lb\" (UniqueName: \"kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.029276 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxgr\" (UniqueName: \"kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr\") pod \"cinder-db-create-mx92t\" (UID: \"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0\") " pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.029304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.038386 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data\") pod 
\"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.040046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.042182 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.071899 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-spzn5"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.073433 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.080633 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9lb\" (UniqueName: \"kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb\") pod \"watcher-db-sync-4xb2s\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.083555 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-spzn5"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.105805 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.136663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxgr\" (UniqueName: \"kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr\") pod \"cinder-db-create-mx92t\" (UID: \"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0\") " pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.200267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxgr\" (UniqueName: \"kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr\") pod \"cinder-db-create-mx92t\" (UID: \"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0\") " pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.202387 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"66469a055fa6094227dc0bba0fc11523e13c2e4a928ae5d3d31160a651cc13b8"} Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.202441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"1c926d73b0f320c6ee227e91546cf769bfd3552c01bcb4792e354d3d862ec9e2"} Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.232389 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.252527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjpd\" (UniqueName: \"kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd\") pod \"barbican-db-create-spzn5\" (UID: \"2d8adfa5-ff54-4436-ae26-3a1723c0692d\") " pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.299548 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cmcgd"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.306092 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.314685 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.331684 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dw6fp" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.331902 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.332054 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.350689 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cmcgd"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.354617 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjpd\" (UniqueName: \"kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd\") pod \"barbican-db-create-spzn5\" (UID: \"2d8adfa5-ff54-4436-ae26-3a1723c0692d\") " 
pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.426845 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjpd\" (UniqueName: \"kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd\") pod \"barbican-db-create-spzn5\" (UID: \"2d8adfa5-ff54-4436-ae26-3a1723c0692d\") " pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.431161 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.441075 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9grl8"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.442307 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.471606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.471670 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.471749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f8kj\" (UniqueName: 
\"kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.473454 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9grl8"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.578646 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f8kj\" (UniqueName: \"kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.578752 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cgg\" (UniqueName: \"kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg\") pod \"neutron-db-create-9grl8\" (UID: \"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c\") " pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.578805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.578838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.588063 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.602947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.644133 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f8kj\" (UniqueName: \"kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj\") pod \"keystone-db-sync-cmcgd\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.681122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cgg\" (UniqueName: \"kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg\") pod \"neutron-db-create-9grl8\" (UID: \"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c\") " pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.706539 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.718906 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cgg\" (UniqueName: \"kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg\") pod \"neutron-db-create-9grl8\" (UID: \"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c\") " pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.815796 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.954626 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:12 crc kubenswrapper[4698]: I1006 12:02:12.992026 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mx92t"] Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.088281 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4xb2s"] Oct 06 12:02:13 crc kubenswrapper[4698]: W1006 12:02:13.098441 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc922a4e8_475c_438a_88d9_8d33f597fda6.slice/crio-d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f WatchSource:0}: Error finding container d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f: Status 404 returned error can't find the container with id d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.192994 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cmcgd"] Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.278959 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cmcgd" 
event={"ID":"a181ed38-72eb-491d-b195-c52e4167bac6","Type":"ContainerStarted","Data":"0591bf729285e334e7a3ec3e3b34f3ce4019448eab22b0b2e2a6a26c1bf03d9b"} Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.306816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerStarted","Data":"b6bb40befec29c115adbb0ae40163fe414800721827817dde5fc42ee96d60b55"} Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.306877 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-spzn5"] Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.322010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4xb2s" event={"ID":"c922a4e8-475c-438a-88d9-8d33f597fda6","Type":"ContainerStarted","Data":"d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f"} Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.328362 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mx92t" event={"ID":"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0","Type":"ContainerStarted","Data":"469e7112fc72459278ac5dfee628ecfd0b32b55b4b689a4da154fe889e3cf1d0"} Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 12:02:13.602977 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9grl8"] Oct 06 12:02:13 crc kubenswrapper[4698]: W1006 12:02:13.615792 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83f4f2c6_4690_44cc_9fa1_0c6ccce1f95c.slice/crio-360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37 WatchSource:0}: Error finding container 360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37: Status 404 returned error can't find the container with id 360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37 Oct 06 12:02:13 crc kubenswrapper[4698]: I1006 
12:02:13.796104 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="aca88314-f6aa-4d15-8c81-2a4c66d4297f" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.351448 4698 generic.go:334] "Generic (PLEG): container finished" podID="83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" containerID="f592bcf9f164e82cc76d5bfd5d50cd07c671bd9bd9d315e2c0015d7d58f9d385" exitCode=0 Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.352082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9grl8" event={"ID":"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c","Type":"ContainerDied","Data":"f592bcf9f164e82cc76d5bfd5d50cd07c671bd9bd9d315e2c0015d7d58f9d385"} Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.352851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9grl8" event={"ID":"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c","Type":"ContainerStarted","Data":"360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37"} Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.355957 4698 generic.go:334] "Generic (PLEG): container finished" podID="2d8adfa5-ff54-4436-ae26-3a1723c0692d" containerID="3297048c281d173a3252495013432d934a4073922e7765fb39b83229d979c48c" exitCode=0 Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.356337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spzn5" event={"ID":"2d8adfa5-ff54-4436-ae26-3a1723c0692d","Type":"ContainerDied","Data":"3297048c281d173a3252495013432d934a4073922e7765fb39b83229d979c48c"} Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.356390 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spzn5" 
event={"ID":"2d8adfa5-ff54-4436-ae26-3a1723c0692d","Type":"ContainerStarted","Data":"8ce29095bebc4cb171de483ddf9249f58b555b26259548a8afc00f60adc81143"} Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.363770 4698 generic.go:334] "Generic (PLEG): container finished" podID="8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" containerID="907b6f96df04aabdd06329c8ee0d53069e95a66e02272a176afacab0a4fd41d9" exitCode=0 Oct 06 12:02:14 crc kubenswrapper[4698]: I1006 12:02:14.363807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mx92t" event={"ID":"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0","Type":"ContainerDied","Data":"907b6f96df04aabdd06329c8ee0d53069e95a66e02272a176afacab0a4fd41d9"} Oct 06 12:02:16 crc kubenswrapper[4698]: I1006 12:02:16.412818 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerStarted","Data":"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2"} Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.319037 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.323841 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.333826 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.477087 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2cgg\" (UniqueName: \"kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg\") pod \"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c\" (UID: \"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c\") " Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.477149 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gjpd\" (UniqueName: \"kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd\") pod \"2d8adfa5-ff54-4436-ae26-3a1723c0692d\" (UID: \"2d8adfa5-ff54-4436-ae26-3a1723c0692d\") " Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.477210 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blxgr\" (UniqueName: \"kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr\") pod \"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0\" (UID: \"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0\") " Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.484167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg" (OuterVolumeSpecName: "kube-api-access-h2cgg") pod "83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" (UID: "83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c"). InnerVolumeSpecName "kube-api-access-h2cgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.486410 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd" (OuterVolumeSpecName: "kube-api-access-7gjpd") pod "2d8adfa5-ff54-4436-ae26-3a1723c0692d" (UID: "2d8adfa5-ff54-4436-ae26-3a1723c0692d"). 
InnerVolumeSpecName "kube-api-access-7gjpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.493848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr" (OuterVolumeSpecName: "kube-api-access-blxgr") pod "8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" (UID: "8ca60d6f-56ad-4cc4-971d-458cd6f5aad0"). InnerVolumeSpecName "kube-api-access-blxgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.537674 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mx92t" event={"ID":"8ca60d6f-56ad-4cc4-971d-458cd6f5aad0","Type":"ContainerDied","Data":"469e7112fc72459278ac5dfee628ecfd0b32b55b4b689a4da154fe889e3cf1d0"} Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.537735 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469e7112fc72459278ac5dfee628ecfd0b32b55b4b689a4da154fe889e3cf1d0" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.537697 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mx92t" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.539940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9grl8" event={"ID":"83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c","Type":"ContainerDied","Data":"360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37"} Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.540041 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360ccf2226ab9bfc29a6fa510e9aba645a049b77d0bab60fd2358196773dfd37" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.540100 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9grl8" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.550433 4698 generic.go:334] "Generic (PLEG): container finished" podID="40388b3e-433c-484a-b0aa-c7e427601657" containerID="faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2" exitCode=0 Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.550512 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerDied","Data":"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2"} Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.553554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spzn5" event={"ID":"2d8adfa5-ff54-4436-ae26-3a1723c0692d","Type":"ContainerDied","Data":"8ce29095bebc4cb171de483ddf9249f58b555b26259548a8afc00f60adc81143"} Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.553622 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce29095bebc4cb171de483ddf9249f58b555b26259548a8afc00f60adc81143" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.553630 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-spzn5" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.581305 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gjpd\" (UniqueName: \"kubernetes.io/projected/2d8adfa5-ff54-4436-ae26-3a1723c0692d-kube-api-access-7gjpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.581349 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blxgr\" (UniqueName: \"kubernetes.io/projected/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0-kube-api-access-blxgr\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:24 crc kubenswrapper[4698]: I1006 12:02:24.581360 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2cgg\" (UniqueName: \"kubernetes.io/projected/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c-kube-api-access-h2cgg\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.894295 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3084-account-create-wn4b7"] Oct 06 12:02:31 crc kubenswrapper[4698]: E1006 12:02:31.895835 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8adfa5-ff54-4436-ae26-3a1723c0692d" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.895854 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8adfa5-ff54-4436-ae26-3a1723c0692d" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: E1006 12:02:31.895896 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.895908 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: E1006 12:02:31.895938 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.896212 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.896480 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.896502 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.896523 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8adfa5-ff54-4436-ae26-3a1723c0692d" containerName="mariadb-database-create" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.897387 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.899917 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.904528 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3084-account-create-wn4b7"] Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.971603 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-55ab-account-create-flpff"] Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.972959 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.974603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29wjr\" (UniqueName: \"kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr\") pod \"cinder-3084-account-create-wn4b7\" (UID: \"6d39ea82-8ea0-4df9-97ba-10ae91856a58\") " pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.978199 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 06 12:02:31 crc kubenswrapper[4698]: I1006 12:02:31.983644 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-55ab-account-create-flpff"] Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.077220 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29wjr\" (UniqueName: \"kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr\") pod \"cinder-3084-account-create-wn4b7\" (UID: \"6d39ea82-8ea0-4df9-97ba-10ae91856a58\") " pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.077360 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqmq\" (UniqueName: \"kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq\") pod \"barbican-55ab-account-create-flpff\" (UID: \"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c\") " pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.103363 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29wjr\" (UniqueName: \"kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr\") pod \"cinder-3084-account-create-wn4b7\" (UID: 
\"6d39ea82-8ea0-4df9-97ba-10ae91856a58\") " pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.178070 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqmq\" (UniqueName: \"kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq\") pod \"barbican-55ab-account-create-flpff\" (UID: \"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c\") " pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.215405 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-645e-account-create-kb64v"] Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.216885 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.230099 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqmq\" (UniqueName: \"kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq\") pod \"barbican-55ab-account-create-flpff\" (UID: \"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c\") " pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.260187 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.261158 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.264446 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645e-account-create-kb64v"] Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.294054 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.382643 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-545rj\" (UniqueName: \"kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj\") pod \"neutron-645e-account-create-kb64v\" (UID: \"ad0c1e41-ba16-4e0b-968d-96f9cf129d89\") " pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.485665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545rj\" (UniqueName: \"kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj\") pod \"neutron-645e-account-create-kb64v\" (UID: \"ad0c1e41-ba16-4e0b-968d-96f9cf129d89\") " pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.519697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545rj\" (UniqueName: \"kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj\") pod \"neutron-645e-account-create-kb64v\" (UID: \"ad0c1e41-ba16-4e0b-968d-96f9cf129d89\") " pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:32 crc kubenswrapper[4698]: I1006 12:02:32.592832 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:33 crc kubenswrapper[4698]: E1006 12:02:33.878298 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Oct 06 12:02:33 crc kubenswrapper[4698]: E1006 12:02:33.880029 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vv587,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privi
leged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-r2jp7_openstack(95c6365d-fa8b-4f4e-9683-e021e05882ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:02:33 crc kubenswrapper[4698]: E1006 12:02:33.881552 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-r2jp7" podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.521315 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.75:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.521403 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.75:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.521614 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.75:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq9lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-4xb2s_openstack(c922a4e8-475c-438a-88d9-8d33f597fda6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.523300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-4xb2s" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.700998 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.75:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-4xb2s" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" Oct 06 12:02:34 crc kubenswrapper[4698]: E1006 12:02:34.701078 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-r2jp7" podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" Oct 06 12:02:37 crc kubenswrapper[4698]: E1006 12:02:37.871796 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Oct 06 12:02:37 crc kubenswrapper[4698]: E1006 12:02:37.872498 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5f8kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-cmcgd_openstack(a181ed38-72eb-491d-b195-c52e4167bac6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:02:37 crc kubenswrapper[4698]: E1006 12:02:37.873866 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-cmcgd" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.455538 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645e-account-create-kb64v"] Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.560752 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-55ab-account-create-flpff"] Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.573371 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3084-account-create-wn4b7"] Oct 06 12:02:38 crc kubenswrapper[4698]: W1006 12:02:38.573774 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d39ea82_8ea0_4df9_97ba_10ae91856a58.slice/crio-3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172 WatchSource:0}: Error finding container 3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172: Status 404 returned error can't find the container with id 3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172 Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.748608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3084-account-create-wn4b7" event={"ID":"6d39ea82-8ea0-4df9-97ba-10ae91856a58","Type":"ContainerStarted","Data":"3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172"} Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.750683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-55ab-account-create-flpff" event={"ID":"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c","Type":"ContainerStarted","Data":"0f74403c56522bf4d2ff2ce6457d1513f31b20b44b0b96f98cae8b670dff656e"} Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.755191 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"537491e76923615a897dbf0447de1807a4d1dfc47da4fdac2a8e8e43e100524c"} Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.755237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"d353a21ae0a74ca43d70db06ece790f726a4a88f1ac89329fd06c6f24fdd1a7b"} Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.758000 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerStarted","Data":"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5"} Oct 06 12:02:38 crc kubenswrapper[4698]: I1006 12:02:38.760956 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645e-account-create-kb64v" event={"ID":"ad0c1e41-ba16-4e0b-968d-96f9cf129d89","Type":"ContainerStarted","Data":"058f8775cfc88bf147adda8c6685cdee5ef5976fef0efc28b03e092d0cbfab6e"} Oct 06 12:02:38 crc kubenswrapper[4698]: E1006 12:02:38.762863 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-cmcgd" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.784999 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"d1ebe5b552fe7218430728bb53b5a9d0c48a6729872789226f53c0e7be265093"} Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.786244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"95fe37eb2a39749cb218b989445bcef3a097994c2a6efc428d8363bc7d698465"} Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.791089 4698 generic.go:334] "Generic (PLEG): container finished" podID="ad0c1e41-ba16-4e0b-968d-96f9cf129d89" containerID="9423c86be3b50ee413bf6ba5829f1997f4559bb26ebc2f0b432405c4f466ab7e" exitCode=0 Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.791243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645e-account-create-kb64v" event={"ID":"ad0c1e41-ba16-4e0b-968d-96f9cf129d89","Type":"ContainerDied","Data":"9423c86be3b50ee413bf6ba5829f1997f4559bb26ebc2f0b432405c4f466ab7e"} Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.795668 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-55ab-account-create-flpff" event={"ID":"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c","Type":"ContainerStarted","Data":"0cab26195ffb38d052537d2975fa777d5eff7334af27328416bb13ceff20f85c"} Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.800579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3084-account-create-wn4b7" event={"ID":"6d39ea82-8ea0-4df9-97ba-10ae91856a58","Type":"ContainerStarted","Data":"c9f831d0a6c1b81f49262d9013b3cde99c1d741c216502412296f3bee4937dee"} Oct 06 12:02:39 crc kubenswrapper[4698]: I1006 12:02:39.852546 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-3084-account-create-wn4b7" podStartSLOduration=8.852524654 podStartE2EDuration="8.852524654s" podCreationTimestamp="2025-10-06 12:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:39.843507771 +0000 UTC m=+1047.256199954" watchObservedRunningTime="2025-10-06 12:02:39.852524654 +0000 UTC m=+1047.265216837" Oct 06 12:02:40 crc kubenswrapper[4698]: I1006 
12:02:40.812471 4698 generic.go:334] "Generic (PLEG): container finished" podID="6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" containerID="0cab26195ffb38d052537d2975fa777d5eff7334af27328416bb13ceff20f85c" exitCode=0 Oct 06 12:02:40 crc kubenswrapper[4698]: I1006 12:02:40.812591 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-55ab-account-create-flpff" event={"ID":"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c","Type":"ContainerDied","Data":"0cab26195ffb38d052537d2975fa777d5eff7334af27328416bb13ceff20f85c"} Oct 06 12:02:40 crc kubenswrapper[4698]: I1006 12:02:40.817490 4698 generic.go:334] "Generic (PLEG): container finished" podID="6d39ea82-8ea0-4df9-97ba-10ae91856a58" containerID="c9f831d0a6c1b81f49262d9013b3cde99c1d741c216502412296f3bee4937dee" exitCode=0 Oct 06 12:02:40 crc kubenswrapper[4698]: I1006 12:02:40.817709 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3084-account-create-wn4b7" event={"ID":"6d39ea82-8ea0-4df9-97ba-10ae91856a58","Type":"ContainerDied","Data":"c9f831d0a6c1b81f49262d9013b3cde99c1d741c216502412296f3bee4937dee"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.236437 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.400851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-545rj\" (UniqueName: \"kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj\") pod \"ad0c1e41-ba16-4e0b-968d-96f9cf129d89\" (UID: \"ad0c1e41-ba16-4e0b-968d-96f9cf129d89\") " Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.415662 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj" (OuterVolumeSpecName: "kube-api-access-545rj") pod "ad0c1e41-ba16-4e0b-968d-96f9cf129d89" (UID: "ad0c1e41-ba16-4e0b-968d-96f9cf129d89"). InnerVolumeSpecName "kube-api-access-545rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.503005 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-545rj\" (UniqueName: \"kubernetes.io/projected/ad0c1e41-ba16-4e0b-968d-96f9cf129d89-kube-api-access-545rj\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.836300 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"429d3b85ac9cc26567e08c478e87a2686a6acdcf1b55e47f51125099d22fecb4"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.836360 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"bbb8e18c9e0b3415d3f61ecf445e1807735c65bbb8f34c1d0761971bf879a7a0"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.841947 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerStarted","Data":"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.841990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerStarted","Data":"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.851616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645e-account-create-kb64v" event={"ID":"ad0c1e41-ba16-4e0b-968d-96f9cf129d89","Type":"ContainerDied","Data":"058f8775cfc88bf147adda8c6685cdee5ef5976fef0efc28b03e092d0cbfab6e"} Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.851730 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="058f8775cfc88bf147adda8c6685cdee5ef5976fef0efc28b03e092d0cbfab6e" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.851845 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-645e-account-create-kb64v" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.891859 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.895440 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.896212 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=30.896173634 podStartE2EDuration="30.896173634s" podCreationTimestamp="2025-10-06 12:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:41.880199548 +0000 UTC m=+1049.292891751" watchObservedRunningTime="2025-10-06 12:02:41.896173634 +0000 UTC m=+1049.308865817" Oct 06 12:02:41 crc kubenswrapper[4698]: I1006 12:02:41.905754 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.357348 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.361895 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.530750 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqmq\" (UniqueName: \"kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq\") pod \"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c\" (UID: \"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c\") " Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.531398 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29wjr\" (UniqueName: \"kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr\") pod \"6d39ea82-8ea0-4df9-97ba-10ae91856a58\" (UID: \"6d39ea82-8ea0-4df9-97ba-10ae91856a58\") " Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.539357 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr" (OuterVolumeSpecName: "kube-api-access-29wjr") pod "6d39ea82-8ea0-4df9-97ba-10ae91856a58" (UID: "6d39ea82-8ea0-4df9-97ba-10ae91856a58"). InnerVolumeSpecName "kube-api-access-29wjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.541091 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq" (OuterVolumeSpecName: "kube-api-access-vfqmq") pod "6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" (UID: "6cd003f6-4f1f-417c-b4cf-412d9c06cb3c"). InnerVolumeSpecName "kube-api-access-vfqmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.634659 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29wjr\" (UniqueName: \"kubernetes.io/projected/6d39ea82-8ea0-4df9-97ba-10ae91856a58-kube-api-access-29wjr\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.634716 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfqmq\" (UniqueName: \"kubernetes.io/projected/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c-kube-api-access-vfqmq\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.892484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"c871100d04b1e6f65430d06c1ab3b8341b2eb4a03232dcc9e78d04d2f84af2fe"} Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.892557 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"3f40982ae93116ddbfb869234eaaca72c2409116e0fa8e0e718bba1ad708d4af"} Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.895907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-55ab-account-create-flpff" event={"ID":"6cd003f6-4f1f-417c-b4cf-412d9c06cb3c","Type":"ContainerDied","Data":"0f74403c56522bf4d2ff2ce6457d1513f31b20b44b0b96f98cae8b670dff656e"} Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.895957 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f74403c56522bf4d2ff2ce6457d1513f31b20b44b0b96f98cae8b670dff656e" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.896103 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-55ab-account-create-flpff" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.902208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3084-account-create-wn4b7" event={"ID":"6d39ea82-8ea0-4df9-97ba-10ae91856a58","Type":"ContainerDied","Data":"3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172"} Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.902261 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3084-account-create-wn4b7" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.902280 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3155ef87dc6120b93c8d24c7c82e061432103f9e24294920cd5d1e2f11141172" Oct 06 12:02:42 crc kubenswrapper[4698]: I1006 12:02:42.908237 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 06 12:02:43 crc kubenswrapper[4698]: I1006 12:02:43.919711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"d4cae5af87a268316153a78b5f4095d2e6ad6ba6866e0f587ccf7a271d7060d5"} Oct 06 12:02:43 crc kubenswrapper[4698]: I1006 12:02:43.920150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"81a011812292e1c0fb57d8b6b846f61d23d9175f38e27db800094fbd2fbf7845"} Oct 06 12:02:43 crc kubenswrapper[4698]: I1006 12:02:43.920164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"240ac959-0487-47d4-b219-7741b2127f50","Type":"ContainerStarted","Data":"2bc02289f743bf7105a7cacf64c71c3caf28fa5a1b7659fc9ee7637e3ad3210b"} Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.249052 4698 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.567499758 podStartE2EDuration="1m10.24899685s" podCreationTimestamp="2025-10-06 12:01:34 +0000 UTC" firstStartedPulling="2025-10-06 12:02:08.411898961 +0000 UTC m=+1015.824591134" lastFinishedPulling="2025-10-06 12:02:41.093396053 +0000 UTC m=+1048.506088226" observedRunningTime="2025-10-06 12:02:43.964484694 +0000 UTC m=+1051.377176877" watchObservedRunningTime="2025-10-06 12:02:44.24899685 +0000 UTC m=+1051.661689033" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.251735 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:02:44 crc kubenswrapper[4698]: E1006 12:02:44.252247 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252285 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: E1006 12:02:44.252304 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d39ea82-8ea0-4df9-97ba-10ae91856a58" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252320 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d39ea82-8ea0-4df9-97ba-10ae91856a58" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: E1006 12:02:44.252379 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c1e41-ba16-4e0b-968d-96f9cf129d89" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252388 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c1e41-ba16-4e0b-968d-96f9cf129d89" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252620 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252649 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c1e41-ba16-4e0b-968d-96f9cf129d89" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.252678 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d39ea82-8ea0-4df9-97ba-10ae91856a58" containerName="mariadb-account-create" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.253888 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.258404 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.273306 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376686 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376772 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376833 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxlk\" (UniqueName: \"kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.376855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478359 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478505 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478536 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxlk\" (UniqueName: \"kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.478677 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.479842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.480167 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.480798 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.481793 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.482664 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.506557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxlk\" (UniqueName: 
\"kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk\") pod \"dnsmasq-dns-77585f5f8c-52h5d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:44 crc kubenswrapper[4698]: I1006 12:02:44.582234 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:45 crc kubenswrapper[4698]: I1006 12:02:45.139733 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:02:45 crc kubenswrapper[4698]: W1006 12:02:45.146475 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513a0662_fda2_4dd1_b6c7_132f646ffd9d.slice/crio-02bef7f3068cc468bf6c5238cba1fd2e96555f246beb3268fd2041a1a9a38ae7 WatchSource:0}: Error finding container 02bef7f3068cc468bf6c5238cba1fd2e96555f246beb3268fd2041a1a9a38ae7: Status 404 returned error can't find the container with id 02bef7f3068cc468bf6c5238cba1fd2e96555f246beb3268fd2041a1a9a38ae7 Oct 06 12:02:45 crc kubenswrapper[4698]: I1006 12:02:45.945533 4698 generic.go:334] "Generic (PLEG): container finished" podID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerID="c77c3b8201f9c50736cf0d52edd120629f090882795d77f2e1c2300975940185" exitCode=0 Oct 06 12:02:45 crc kubenswrapper[4698]: I1006 12:02:45.945647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" event={"ID":"513a0662-fda2-4dd1-b6c7-132f646ffd9d","Type":"ContainerDied","Data":"c77c3b8201f9c50736cf0d52edd120629f090882795d77f2e1c2300975940185"} Oct 06 12:02:45 crc kubenswrapper[4698]: I1006 12:02:45.946330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" event={"ID":"513a0662-fda2-4dd1-b6c7-132f646ffd9d","Type":"ContainerStarted","Data":"02bef7f3068cc468bf6c5238cba1fd2e96555f246beb3268fd2041a1a9a38ae7"} Oct 06 
12:02:46 crc kubenswrapper[4698]: I1006 12:02:46.958145 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" event={"ID":"513a0662-fda2-4dd1-b6c7-132f646ffd9d","Type":"ContainerStarted","Data":"ae27d4cfa04b34f13c1644ac065d511cc78048696707568a218c79f588260ffb"} Oct 06 12:02:46 crc kubenswrapper[4698]: I1006 12:02:46.958789 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:46 crc kubenswrapper[4698]: I1006 12:02:46.989786 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podStartSLOduration=2.98975774 podStartE2EDuration="2.98975774s" podCreationTimestamp="2025-10-06 12:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:02:46.989231128 +0000 UTC m=+1054.401923331" watchObservedRunningTime="2025-10-06 12:02:46.98975774 +0000 UTC m=+1054.402449953" Oct 06 12:02:47 crc kubenswrapper[4698]: I1006 12:02:47.977685 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4xb2s" event={"ID":"c922a4e8-475c-438a-88d9-8d33f597fda6","Type":"ContainerStarted","Data":"69ea6df088b137f18802533343d7250a925e3061cf765b6b4ed9f830cbb31b86"} Oct 06 12:02:48 crc kubenswrapper[4698]: I1006 12:02:48.014270 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-4xb2s" podStartSLOduration=2.686527239 podStartE2EDuration="37.014232091s" podCreationTimestamp="2025-10-06 12:02:11 +0000 UTC" firstStartedPulling="2025-10-06 12:02:13.103125928 +0000 UTC m=+1020.515818091" lastFinishedPulling="2025-10-06 12:02:47.43083072 +0000 UTC m=+1054.843522943" observedRunningTime="2025-10-06 12:02:48.010392505 +0000 UTC m=+1055.423084688" watchObservedRunningTime="2025-10-06 12:02:48.014232091 +0000 UTC m=+1055.426924304" Oct 06 12:02:50 crc 
kubenswrapper[4698]: I1006 12:02:50.010823 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r2jp7" event={"ID":"95c6365d-fa8b-4f4e-9683-e021e05882ff","Type":"ContainerStarted","Data":"3120db0d68e02bb3d6653182209124dd8a0e844037126303e7d0e9429f65bc62"} Oct 06 12:02:51 crc kubenswrapper[4698]: I1006 12:02:51.023348 4698 generic.go:334] "Generic (PLEG): container finished" podID="c922a4e8-475c-438a-88d9-8d33f597fda6" containerID="69ea6df088b137f18802533343d7250a925e3061cf765b6b4ed9f830cbb31b86" exitCode=0 Oct 06 12:02:51 crc kubenswrapper[4698]: I1006 12:02:51.023486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4xb2s" event={"ID":"c922a4e8-475c-438a-88d9-8d33f597fda6","Type":"ContainerDied","Data":"69ea6df088b137f18802533343d7250a925e3061cf765b6b4ed9f830cbb31b86"} Oct 06 12:02:51 crc kubenswrapper[4698]: I1006 12:02:51.058337 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r2jp7" podStartSLOduration=3.561127531 podStartE2EDuration="43.058316953s" podCreationTimestamp="2025-10-06 12:02:08 +0000 UTC" firstStartedPulling="2025-10-06 12:02:09.297052735 +0000 UTC m=+1016.709744908" lastFinishedPulling="2025-10-06 12:02:48.794242157 +0000 UTC m=+1056.206934330" observedRunningTime="2025-10-06 12:02:50.03896983 +0000 UTC m=+1057.451662013" watchObservedRunningTime="2025-10-06 12:02:51.058316953 +0000 UTC m=+1058.471009126" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.435770 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.581210 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data\") pod \"c922a4e8-475c-438a-88d9-8d33f597fda6\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.581310 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data\") pod \"c922a4e8-475c-438a-88d9-8d33f597fda6\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.581800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9lb\" (UniqueName: \"kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb\") pod \"c922a4e8-475c-438a-88d9-8d33f597fda6\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.585925 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle\") pod \"c922a4e8-475c-438a-88d9-8d33f597fda6\" (UID: \"c922a4e8-475c-438a-88d9-8d33f597fda6\") " Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.586974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c922a4e8-475c-438a-88d9-8d33f597fda6" (UID: "c922a4e8-475c-438a-88d9-8d33f597fda6"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.589298 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb" (OuterVolumeSpecName: "kube-api-access-sq9lb") pod "c922a4e8-475c-438a-88d9-8d33f597fda6" (UID: "c922a4e8-475c-438a-88d9-8d33f597fda6"). InnerVolumeSpecName "kube-api-access-sq9lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.618752 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c922a4e8-475c-438a-88d9-8d33f597fda6" (UID: "c922a4e8-475c-438a-88d9-8d33f597fda6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.641439 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data" (OuterVolumeSpecName: "config-data") pod "c922a4e8-475c-438a-88d9-8d33f597fda6" (UID: "c922a4e8-475c-438a-88d9-8d33f597fda6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.688134 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.688181 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.688196 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c922a4e8-475c-438a-88d9-8d33f597fda6-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:52 crc kubenswrapper[4698]: I1006 12:02:52.688211 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9lb\" (UniqueName: \"kubernetes.io/projected/c922a4e8-475c-438a-88d9-8d33f597fda6-kube-api-access-sq9lb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:53 crc kubenswrapper[4698]: I1006 12:02:53.051830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cmcgd" event={"ID":"a181ed38-72eb-491d-b195-c52e4167bac6","Type":"ContainerStarted","Data":"dd3aa85021520da22b0431a1a148ea4d5be003cde0aa3869286d92941fa57a85"} Oct 06 12:02:53 crc kubenswrapper[4698]: I1006 12:02:53.054564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4xb2s" event={"ID":"c922a4e8-475c-438a-88d9-8d33f597fda6","Type":"ContainerDied","Data":"d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f"} Oct 06 12:02:53 crc kubenswrapper[4698]: I1006 12:02:53.054627 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5cf1cc7851786aad8814a4ed5d9fd7761568cee92d924781b7c41ba31ac893f" Oct 06 12:02:53 crc 
kubenswrapper[4698]: I1006 12:02:53.054679 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4xb2s" Oct 06 12:02:53 crc kubenswrapper[4698]: I1006 12:02:53.096326 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cmcgd" podStartSLOduration=2.364104104 podStartE2EDuration="41.096297751s" podCreationTimestamp="2025-10-06 12:02:12 +0000 UTC" firstStartedPulling="2025-10-06 12:02:13.205961888 +0000 UTC m=+1020.618654061" lastFinishedPulling="2025-10-06 12:02:51.938155495 +0000 UTC m=+1059.350847708" observedRunningTime="2025-10-06 12:02:53.074572092 +0000 UTC m=+1060.487264275" watchObservedRunningTime="2025-10-06 12:02:53.096297751 +0000 UTC m=+1060.508989934" Oct 06 12:02:54 crc kubenswrapper[4698]: I1006 12:02:54.584351 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:02:54 crc kubenswrapper[4698]: I1006 12:02:54.682881 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:02:54 crc kubenswrapper[4698]: I1006 12:02:54.683275 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-fxtdm" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="dnsmasq-dns" containerID="cri-o://f39d036aa9958cfb84c55ffb75b12c489d67f27e2ab7854d72168110d8ddbd5d" gracePeriod=10 Oct 06 12:02:54 crc kubenswrapper[4698]: I1006 12:02:54.956395 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-fxtdm" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: connect: connection refused" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.081669 4698 generic.go:334] "Generic (PLEG): container finished" podID="7730f6e1-8a03-463f-90d6-41d706536495" 
containerID="f39d036aa9958cfb84c55ffb75b12c489d67f27e2ab7854d72168110d8ddbd5d" exitCode=0 Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.081710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fxtdm" event={"ID":"7730f6e1-8a03-463f-90d6-41d706536495","Type":"ContainerDied","Data":"f39d036aa9958cfb84c55ffb75b12c489d67f27e2ab7854d72168110d8ddbd5d"} Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.234771 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.234846 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.240282 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.337349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc\") pod \"7730f6e1-8a03-463f-90d6-41d706536495\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.337399 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb\") pod \"7730f6e1-8a03-463f-90d6-41d706536495\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.337492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config\") pod \"7730f6e1-8a03-463f-90d6-41d706536495\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.337581 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb\") pod \"7730f6e1-8a03-463f-90d6-41d706536495\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.337700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4672r\" (UniqueName: \"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r\") pod \"7730f6e1-8a03-463f-90d6-41d706536495\" (UID: \"7730f6e1-8a03-463f-90d6-41d706536495\") " Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.354808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r" (OuterVolumeSpecName: "kube-api-access-4672r") pod "7730f6e1-8a03-463f-90d6-41d706536495" (UID: "7730f6e1-8a03-463f-90d6-41d706536495"). InnerVolumeSpecName "kube-api-access-4672r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.380231 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config" (OuterVolumeSpecName: "config") pod "7730f6e1-8a03-463f-90d6-41d706536495" (UID: "7730f6e1-8a03-463f-90d6-41d706536495"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.384644 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7730f6e1-8a03-463f-90d6-41d706536495" (UID: "7730f6e1-8a03-463f-90d6-41d706536495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.389231 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7730f6e1-8a03-463f-90d6-41d706536495" (UID: "7730f6e1-8a03-463f-90d6-41d706536495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.401556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7730f6e1-8a03-463f-90d6-41d706536495" (UID: "7730f6e1-8a03-463f-90d6-41d706536495"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.440867 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.440896 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.440907 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.440916 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7730f6e1-8a03-463f-90d6-41d706536495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:55 crc kubenswrapper[4698]: I1006 12:02:55.440924 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4672r\" (UniqueName: \"kubernetes.io/projected/7730f6e1-8a03-463f-90d6-41d706536495-kube-api-access-4672r\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.095318 4698 generic.go:334] "Generic (PLEG): container finished" podID="a181ed38-72eb-491d-b195-c52e4167bac6" containerID="dd3aa85021520da22b0431a1a148ea4d5be003cde0aa3869286d92941fa57a85" exitCode=0 Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.095426 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cmcgd" event={"ID":"a181ed38-72eb-491d-b195-c52e4167bac6","Type":"ContainerDied","Data":"dd3aa85021520da22b0431a1a148ea4d5be003cde0aa3869286d92941fa57a85"} Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 
12:02:56.098431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fxtdm" event={"ID":"7730f6e1-8a03-463f-90d6-41d706536495","Type":"ContainerDied","Data":"e221609c2e2504ada71329df395c93df3abc77e688970937dcb37f2a69644644"} Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.098496 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fxtdm" Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.098527 4698 scope.go:117] "RemoveContainer" containerID="f39d036aa9958cfb84c55ffb75b12c489d67f27e2ab7854d72168110d8ddbd5d" Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.121148 4698 scope.go:117] "RemoveContainer" containerID="2d4263676dd70ee5d46cd8aec9d16e8fff62f01b070c06715b30d200a2ac167a" Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.148623 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:02:56 crc kubenswrapper[4698]: I1006 12:02:56.158474 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fxtdm"] Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.118061 4698 generic.go:334] "Generic (PLEG): container finished" podID="95c6365d-fa8b-4f4e-9683-e021e05882ff" containerID="3120db0d68e02bb3d6653182209124dd8a0e844037126303e7d0e9429f65bc62" exitCode=0 Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.118537 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r2jp7" event={"ID":"95c6365d-fa8b-4f4e-9683-e021e05882ff","Type":"ContainerDied","Data":"3120db0d68e02bb3d6653182209124dd8a0e844037126303e7d0e9429f65bc62"} Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.347110 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7730f6e1-8a03-463f-90d6-41d706536495" path="/var/lib/kubelet/pods/7730f6e1-8a03-463f-90d6-41d706536495/volumes" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 
12:02:57.569099 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.693658 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle\") pod \"a181ed38-72eb-491d-b195-c52e4167bac6\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.693907 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f8kj\" (UniqueName: \"kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj\") pod \"a181ed38-72eb-491d-b195-c52e4167bac6\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.693937 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data\") pod \"a181ed38-72eb-491d-b195-c52e4167bac6\" (UID: \"a181ed38-72eb-491d-b195-c52e4167bac6\") " Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.701819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj" (OuterVolumeSpecName: "kube-api-access-5f8kj") pod "a181ed38-72eb-491d-b195-c52e4167bac6" (UID: "a181ed38-72eb-491d-b195-c52e4167bac6"). InnerVolumeSpecName "kube-api-access-5f8kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.727100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a181ed38-72eb-491d-b195-c52e4167bac6" (UID: "a181ed38-72eb-491d-b195-c52e4167bac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.796555 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.796621 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f8kj\" (UniqueName: \"kubernetes.io/projected/a181ed38-72eb-491d-b195-c52e4167bac6-kube-api-access-5f8kj\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.812629 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data" (OuterVolumeSpecName: "config-data") pod "a181ed38-72eb-491d-b195-c52e4167bac6" (UID: "a181ed38-72eb-491d-b195-c52e4167bac6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:57 crc kubenswrapper[4698]: I1006 12:02:57.899258 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a181ed38-72eb-491d-b195-c52e4167bac6-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.133302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cmcgd" event={"ID":"a181ed38-72eb-491d-b195-c52e4167bac6","Type":"ContainerDied","Data":"0591bf729285e334e7a3ec3e3b34f3ce4019448eab22b0b2e2a6a26c1bf03d9b"} Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.133780 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0591bf729285e334e7a3ec3e3b34f3ce4019448eab22b0b2e2a6a26c1bf03d9b" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.133334 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cmcgd" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422306 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5p4n5"] Oct 06 12:02:58 crc kubenswrapper[4698]: E1006 12:02:58.422721 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="dnsmasq-dns" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422740 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="dnsmasq-dns" Oct 06 12:02:58 crc kubenswrapper[4698]: E1006 12:02:58.422755 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="init" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422762 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="init" Oct 06 12:02:58 crc kubenswrapper[4698]: E1006 
12:02:58.422773 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" containerName="keystone-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422779 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" containerName="keystone-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: E1006 12:02:58.422804 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" containerName="watcher-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422810 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" containerName="watcher-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.422986 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7730f6e1-8a03-463f-90d6-41d706536495" containerName="dnsmasq-dns" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.423003 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" containerName="keystone-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.423018 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" containerName="watcher-db-sync" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.423657 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.433646 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.433837 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.433948 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dw6fp" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.434095 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.441497 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.443227 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.450165 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5p4n5"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.467125 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.511287 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518378 
4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lhg\" (UniqueName: \"kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518473 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518493 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 
crc kubenswrapper[4698]: I1006 12:02:58.518517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518539 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjqc\" (UniqueName: \"kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 
12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.518639 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.520207 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.526430 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.532244 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-cw6dm" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.546843 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.548439 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.555616 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.600484 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.601738 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.610330 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621469 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhjqc\" (UniqueName: \"kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qjw\" (UniqueName: \"kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621625 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621648 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621694 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621714 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsr9\" (UniqueName: \"kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: 
\"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621768 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.621999 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.622041 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.622069 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 
12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.622090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5lhg\" (UniqueName: \"kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.625654 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.626262 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.627207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.627969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.637722 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.647842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.652618 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.653728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.657523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.677799 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.685242 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5lhg\" (UniqueName: \"kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg\") pod \"dnsmasq-dns-55fff446b9-rgkkq\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.686739 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhjqc\" (UniqueName: \"kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.698853 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle\") pod \"keystone-bootstrap-5p4n5\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.723951 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.723999 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724059 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtd2p\" (UniqueName: \"kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724119 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qjw\" (UniqueName: \"kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724198 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724246 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724271 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsr9\" (UniqueName: \"kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724371 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724398 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " 
pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.724812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.726321 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.730678 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.744337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.746099 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.752101 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.753451 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.754154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.766397 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsr9\" (UniqueName: \"kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9\") pod \"watcher-api-0\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.769228 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data\") pod \"watcher-applier-0\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.789040 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.801350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qjw\" (UniqueName: \"kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw\") pod \"watcher-applier-0\" (UID: 
\"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.808864 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.828267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.828321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.828348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtd2p\" (UniqueName: \"kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.828408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.828434 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.830136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.832218 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.832736 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.841204 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.844969 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.858318 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.862295 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.871183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtd2p\" (UniqueName: \"kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p\") pod \"watcher-decision-engine-0\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.871277 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.871313 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.871544 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pgwrt" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.871651 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.876529 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5xxbt"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.877761 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.887383 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.887722 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6nchg" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.888133 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.894263 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.911005 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-l9vdh"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.912581 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.913384 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.925090 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5xxbt"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.928440 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.928645 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r69m6" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.928796 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.929853 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932163 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932255 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932471 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932562 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gt4s\" (UniqueName: \"kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.932686 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj7gz\" (UniqueName: \"kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.938094 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-db-sync-l9vdh"] Oct 06 12:02:58 crc kubenswrapper[4698]: I1006 12:02:58.952604 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.014392 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.018139 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.023872 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.024183 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.042998 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nc7nk"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044355 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj7gz\" (UniqueName: \"kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044649 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 
06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044818 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044902 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.044998 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.045037 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kzc\" (UniqueName: \"kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.045116 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.045152 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gt4s\" (UniqueName: \"kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.045204 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.048847 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.049665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.050174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.052177 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.064708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.071135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.071487 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.077683 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nc7nk"] Oct 06 12:02:59 crc kubenswrapper[4698]: 
I1006 12:02:59.090846 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z89rc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.092746 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.104374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj7gz\" (UniqueName: \"kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz\") pod \"horizon-5974c58885-pt76k\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.113808 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.118296 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7pkpz"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.119509 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.126400 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.126662 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.126923 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jxgqh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.149751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gt4s\" (UniqueName: \"kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s\") pod \"neutron-db-sync-5xxbt\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179105 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnglf\" (UniqueName: \"kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179163 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179199 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179504 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " 
pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179536 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179604 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fkgk\" (UniqueName: \"kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179952 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26kzc\" (UniqueName: \"kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.179990 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.180422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.186598 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.189666 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.201517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.216282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kzc\" (UniqueName: \"kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.224033 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.231095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.239410 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.276225 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data\") pod \"cinder-db-sync-l9vdh\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.277353 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.334075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335318 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnglf\" (UniqueName: \"kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 
12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335535 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335556 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335590 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335676 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335713 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fkgk\" (UniqueName: \"kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.335792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlh78\" (UniqueName: \"kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.361664 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.370940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.371574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.374174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.386356 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.388256 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.390676 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.398612 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7pkpz"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.403641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.410693 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnglf\" (UniqueName: \"kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.411274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.413341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle\") pod \"barbican-db-sync-nc7nk\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") " 
pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.414771 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fkgk\" (UniqueName: \"kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk\") pod \"ceilometer-0\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.414843 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.420246 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r2jp7" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.424460 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.440877 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:02:59 crc kubenswrapper[4698]: E1006 12:02:59.441378 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" containerName="glance-db-sync" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441391 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" containerName="glance-db-sync" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441592 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" containerName="glance-db-sync" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441593 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " 
pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.441870 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.442641 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.445939 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.453637 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98p9\" (UniqueName: \"kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.453759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlh78\" (UniqueName: \"kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.453816 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.453860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.461640 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.461660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.462436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.490107 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.491869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vlh78\" (UniqueName: \"kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78\") pod \"placement-db-sync-7pkpz\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.507061 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7pkpz" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.514081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nc7nk" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.564230 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data\") pod \"95c6365d-fa8b-4f4e-9683-e021e05882ff\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.564421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv587\" (UniqueName: \"kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587\") pod \"95c6365d-fa8b-4f4e-9683-e021e05882ff\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.564593 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle\") pod \"95c6365d-fa8b-4f4e-9683-e021e05882ff\" (UID: \"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.564683 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data\") pod \"95c6365d-fa8b-4f4e-9683-e021e05882ff\" (UID: 
\"95c6365d-fa8b-4f4e-9683-e021e05882ff\") " Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.564976 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 
12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572869 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcgfz\" (UniqueName: \"kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.572997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98p9\" (UniqueName: \"kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.573050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc 
kubenswrapper[4698]: I1006 12:02:59.573222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.573445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.574051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.574954 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.575479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.575897 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.580258 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "95c6365d-fa8b-4f4e-9683-e021e05882ff" (UID: "95c6365d-fa8b-4f4e-9683-e021e05882ff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.580396 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587" (OuterVolumeSpecName: "kube-api-access-vv587") pod "95c6365d-fa8b-4f4e-9683-e021e05882ff" (UID: "95c6365d-fa8b-4f4e-9683-e021e05882ff"). InnerVolumeSpecName "kube-api-access-vv587". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.605121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98p9\" (UniqueName: \"kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9\") pod \"dnsmasq-dns-76fcf4b695-4v2z8\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.620609 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c6365d-fa8b-4f4e-9683-e021e05882ff" (UID: "95c6365d-fa8b-4f4e-9683-e021e05882ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.634172 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data" (OuterVolumeSpecName: "config-data") pod "95c6365d-fa8b-4f4e-9683-e021e05882ff" (UID: "95c6365d-fa8b-4f4e-9683-e021e05882ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcgfz\" (UniqueName: \"kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 
12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675654 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675736 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675752 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv587\" (UniqueName: \"kubernetes.io/projected/95c6365d-fa8b-4f4e-9683-e021e05882ff-kube-api-access-vv587\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675765 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675777 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95c6365d-fa8b-4f4e-9683-e021e05882ff-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.675983 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.677307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.677598 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.677849 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.687244 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.692513 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.698823 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcgfz\" (UniqueName: \"kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz\") pod \"horizon-7f748b7d89-lvwdc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.743876 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5p4n5"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.751888 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.871348 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:02:59 crc kubenswrapper[4698]: I1006 12:02:59.986783 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.001063 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:00 crc kubenswrapper[4698]: W1006 12:03:00.048902 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3304f0_cef5_447b_9b57_be2830595b5b.slice/crio-3a73116adca764d88e4b3e23214bc0e86a1aada64051c50753d041f114925982 WatchSource:0}: Error finding container 3a73116adca764d88e4b3e23214bc0e86a1aada64051c50753d041f114925982: Status 404 returned error can't find the container with id 3a73116adca764d88e4b3e23214bc0e86a1aada64051c50753d041f114925982 Oct 06 12:03:00 crc kubenswrapper[4698]: W1006 12:03:00.056659 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9f1455_9d47_4d18_bcfe_5deb642ded6c.slice/crio-dc71cc476528c2681832ec4f0be7c2fbbfb26aaca8e6a8e3076247e383a824b6 WatchSource:0}: Error finding container dc71cc476528c2681832ec4f0be7c2fbbfb26aaca8e6a8e3076247e383a824b6: Status 404 returned error can't find the container with id dc71cc476528c2681832ec4f0be7c2fbbfb26aaca8e6a8e3076247e383a824b6 Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.167211 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5xxbt"] Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.365553 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:00 crc kubenswrapper[4698]: W1006 12:03:00.378826 4698 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7fb1575_bbc3_4d9f_a0ce_31652f935cac.slice/crio-c0891cfb60f91378cda01190c426d7e0595433fd8d58edc676f42e033dfc7e7b WatchSource:0}: Error finding container c0891cfb60f91378cda01190c426d7e0595433fd8d58edc676f42e033dfc7e7b: Status 404 returned error can't find the container with id c0891cfb60f91378cda01190c426d7e0595433fd8d58edc676f42e033dfc7e7b Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.409110 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r2jp7" event={"ID":"95c6365d-fa8b-4f4e-9683-e021e05882ff","Type":"ContainerDied","Data":"6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.409155 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a31a74abc83c53250761fc5270f5ea4ae18513447c21e037bf4bf6250dc8ea8" Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.409230 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r2jp7" Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.417451 4698 generic.go:334] "Generic (PLEG): container finished" podID="b82fa017-f6c6-4b51-ab1f-abf36396d690" containerID="5fed0d0d3cae9afbf8e88743f0af9cc15d9cc45d69ad15f9ba4dcc2693672d17" exitCode=0 Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.417543 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" event={"ID":"b82fa017-f6c6-4b51-ab1f-abf36396d690","Type":"ContainerDied","Data":"5fed0d0d3cae9afbf8e88743f0af9cc15d9cc45d69ad15f9ba4dcc2693672d17"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.417577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" event={"ID":"b82fa017-f6c6-4b51-ab1f-abf36396d690","Type":"ContainerStarted","Data":"ee18293e11b84b0af3f978565d398ba8228a639d5b9a8cf707add72cd9f47b9c"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.425295 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0c9f1455-9d47-4d18-bcfe-5deb642ded6c","Type":"ContainerStarted","Data":"dc71cc476528c2681832ec4f0be7c2fbbfb26aaca8e6a8e3076247e383a824b6"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.428843 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p4n5" event={"ID":"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916","Type":"ContainerStarted","Data":"001abc23485f32f08cce21908ea37a8ab5c4b8a32a1885c412b12faf1b97a0a6"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.428894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p4n5" event={"ID":"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916","Type":"ContainerStarted","Data":"cd77b6cf17e40b8a919928fa8f244cd17df71c4ce52c33651d4bd738fe9ab489"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.432255 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-applier-0" event={"ID":"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1","Type":"ContainerStarted","Data":"944cbb078af45fe10c94dd71df6e21e9a25c5cbdbc220f248bb5814182654b70"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.444088 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerStarted","Data":"c0891cfb60f91378cda01190c426d7e0595433fd8d58edc676f42e033dfc7e7b"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.459319 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerStarted","Data":"917d92bb97c301465120fb5118139389d174cf6e72d9b51d68e5b7115b526bf5"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.459364 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerStarted","Data":"3a73116adca764d88e4b3e23214bc0e86a1aada64051c50753d041f114925982"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.462723 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5xxbt" event={"ID":"2c03557f-4b1f-4104-87a7-4a5880180c86","Type":"ContainerStarted","Data":"8a02fb7f326627fb038fce58f00640c1120dca73b5016e614005fa67856ba365"} Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.490006 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5p4n5" podStartSLOduration=2.489948796 podStartE2EDuration="2.489948796s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:00.453289507 +0000 UTC m=+1067.865981680" watchObservedRunningTime="2025-10-06 12:03:00.489948796 +0000 UTC m=+1067.902640969" Oct 06 12:03:00 crc kubenswrapper[4698]: 
I1006 12:03:00.515914 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5xxbt" podStartSLOduration=2.515896249 podStartE2EDuration="2.515896249s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:00.490754037 +0000 UTC m=+1067.903446220" watchObservedRunningTime="2025-10-06 12:03:00.515896249 +0000 UTC m=+1067.928588422" Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.789787 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.817671 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7pkpz"] Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.964354 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:03:00 crc kubenswrapper[4698]: I1006 12:03:00.981322 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.096307 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l9vdh"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.132369 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nc7nk"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.133756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.133815 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.133840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5lhg\" (UniqueName: \"kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.133864 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.133979 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.134089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb\") pod \"b82fa017-f6c6-4b51-ab1f-abf36396d690\" (UID: \"b82fa017-f6c6-4b51-ab1f-abf36396d690\") " Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.162223 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.199576 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.216915 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg" (OuterVolumeSpecName: "kube-api-access-g5lhg") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "kube-api-access-g5lhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.242340 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config" (OuterVolumeSpecName: "config") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.244325 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.245414 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5lhg\" (UniqueName: \"kubernetes.io/projected/b82fa017-f6c6-4b51-ab1f-abf36396d690-kube-api-access-g5lhg\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.274080 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.289139 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:01 crc kubenswrapper[4698]: E1006 12:03:01.289851 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82fa017-f6c6-4b51-ab1f-abf36396d690" containerName="init" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.291200 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82fa017-f6c6-4b51-ab1f-abf36396d690" containerName="init" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.291562 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82fa017-f6c6-4b51-ab1f-abf36396d690" containerName="init" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.301316 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.347433 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.361388 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.361425 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.382364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.389315 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b82fa017-f6c6-4b51-ab1f-abf36396d690" (UID: "b82fa017-f6c6-4b51-ab1f-abf36396d690"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.401776 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.459395 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.462599 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.462794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.462939 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4hx\" (UniqueName: \"kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.463113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 
12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.463200 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.463259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.463385 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.463452 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b82fa017-f6c6-4b51-ab1f-abf36396d690-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.528613 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.551703 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.554249 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.566950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerStarted","Data":"8dd15571cddcdc3050eb069c8fdb6a97d3c74ba6259b038488b9a56d5dd9153c"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.567680 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.572102 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.573887 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.580922 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582582 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4hx\" (UniqueName: 
\"kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582763 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582833 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.582855 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.583792 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.584704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.594859 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": dial tcp 10.217.0.148:9322: connect: connection refused" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.596427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.596613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.597419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5xxbt" event={"ID":"2c03557f-4b1f-4104-87a7-4a5880180c86","Type":"ContainerStarted","Data":"07b05c7b787ed15a0f44335b8a40c29369540e885d5f29b7d908f36778e6a8d2"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.598148 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.598883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.601828 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.602038 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.602190 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-47rrc" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.616523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" event={"ID":"b82fa017-f6c6-4b51-ab1f-abf36396d690","Type":"ContainerDied","Data":"ee18293e11b84b0af3f978565d398ba8228a639d5b9a8cf707add72cd9f47b9c"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.616569 4698 scope.go:117] "RemoveContainer" containerID="5fed0d0d3cae9afbf8e88743f0af9cc15d9cc45d69ad15f9ba4dcc2693672d17" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.617093 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-rgkkq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.622115 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.641538 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7pkpz" event={"ID":"828a84bc-95cc-448b-a84c-4ca894dd754b","Type":"ContainerStarted","Data":"445539ad5c9f9c411cb24bdafcb15af21d36b5dc26c6cf2aeb4de2235b997816"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.644342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4hx\" (UniqueName: \"kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx\") pod \"dnsmasq-dns-8b5c85b87-rgglq\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.655070 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f748b7d89-lvwdc" event={"ID":"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc","Type":"ContainerStarted","Data":"16c452e72ccdcc18a33f9b0ae875799f7ce211010a816760e64fed21e6e714af"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.679808 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" event={"ID":"4ce06b70-44c3-4df9-9195-ed30d283fca5","Type":"ContainerStarted","Data":"2f901ec1b94ed483f299300d0ba920229043d2b2fd25060721734c1efc20ff98"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.685310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 
12:03:01.685348 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.685413 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.685467 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.685991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l9vdh" event={"ID":"d620584e-f9cd-432a-9f55-9aa1f1056766","Type":"ContainerStarted","Data":"70cda1faecc4ed4caa84e533bc9f4b68fcd531a86f40cec86907127b81a0ec37"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686740 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686761 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686819 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686854 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zlkrd\" (UniqueName: \"kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.686886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sqlx\" (UniqueName: \"kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.696614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nc7nk" event={"ID":"df1bd773-04e5-4524-a48e-b7a65c983a89","Type":"ContainerStarted","Data":"70855594300c70b467492f19741d942b1a9b1fb5c6e34895b494cb25f47cae74"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.703505 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5974c58885-pt76k" event={"ID":"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd","Type":"ContainerStarted","Data":"a01acd03fd3d93954efc3515be69da8611909f25e9c95ce5f86569bbf351834d"} Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.727154 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.729867 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.729849079 podStartE2EDuration="3.729849079s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:01.707956587 +0000 UTC m=+1069.120648760" watchObservedRunningTime="2025-10-06 12:03:01.729849079 +0000 UTC m=+1069.142541242" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.777871 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.788216 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-rgkkq"] Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs\") pod \"glance-default-external-api-0\" (UID: 
\"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlkrd\" (UniqueName: \"kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.789975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sqlx\" (UniqueName: \"kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790275 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " 
pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790299 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.790960 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.791363 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.791763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.793436 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.793571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.799548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.800717 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.802247 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.804900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.820779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.835798 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sqlx\" (UniqueName: \"kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx\") pod \"horizon-6c494d646f-2f4j5\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.857529 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlkrd\" (UniqueName: \"kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd\") pod 
\"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.858459 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:01 crc kubenswrapper[4698]: I1006 12:03:01.930594 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:01 crc kubenswrapper[4698]: E1006 12:03:01.983532 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb82fa017_f6c6_4b51_ab1f_abf36396d690.slice/crio-ee18293e11b84b0af3f978565d398ba8228a639d5b9a8cf707add72cd9f47b9c\": RecentStats: unable to find data in memory cache]" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.010294 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.431682 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.435578 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.441310 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.509004 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.565677 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.620055 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.622455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.622542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.622601 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.622650 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.623699 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95tj\" (UniqueName: \"kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.623736 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.623762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.727454 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.727927 4698 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730660 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95tj\" (UniqueName: \"kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730706 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.730736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.731198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.731425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.743071 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.745465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.749498 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.754433 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95tj\" (UniqueName: \"kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.756839 4698 generic.go:334] "Generic (PLEG): container finished" podID="4ce06b70-44c3-4df9-9195-ed30d283fca5" containerID="84b7c83590f386d3cc3c5144f6eeaac99edbd1ecc5446c238dedc272976d05a9" exitCode=0 Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.757069 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" event={"ID":"4ce06b70-44c3-4df9-9195-ed30d283fca5","Type":"ContainerDied","Data":"84b7c83590f386d3cc3c5144f6eeaac99edbd1ecc5446c238dedc272976d05a9"} Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.769645 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api-log" containerID="cri-o://917d92bb97c301465120fb5118139389d174cf6e72d9b51d68e5b7115b526bf5" gracePeriod=30 Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.774099 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" containerID="cri-o://8dd15571cddcdc3050eb069c8fdb6a97d3c74ba6259b038488b9a56d5dd9153c" gracePeriod=30 Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.815553 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.816332 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": EOF" Oct 06 12:03:02 crc kubenswrapper[4698]: I1006 12:03:02.832359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:03 crc kubenswrapper[4698]: I1006 12:03:03.067901 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:03 crc kubenswrapper[4698]: I1006 12:03:03.347734 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82fa017-f6c6-4b51-ab1f-abf36396d690" path="/var/lib/kubelet/pods/b82fa017-f6c6-4b51-ab1f-abf36396d690/volumes" Oct 06 12:03:03 crc kubenswrapper[4698]: I1006 12:03:03.787987 4698 generic.go:334] "Generic (PLEG): container finished" podID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerID="917d92bb97c301465120fb5118139389d174cf6e72d9b51d68e5b7115b526bf5" exitCode=143 Oct 06 12:03:03 crc kubenswrapper[4698]: I1006 12:03:03.788071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerDied","Data":"917d92bb97c301465120fb5118139389d174cf6e72d9b51d68e5b7115b526bf5"} Oct 06 12:03:03 crc kubenswrapper[4698]: I1006 12:03:03.913899 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.122110 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270599 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270667 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270772 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270859 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270892 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.270973 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t98p9\" 
(UniqueName: \"kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9\") pod \"4ce06b70-44c3-4df9-9195-ed30d283fca5\" (UID: \"4ce06b70-44c3-4df9-9195-ed30d283fca5\") " Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.298893 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.305642 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.308350 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9" (OuterVolumeSpecName: "kube-api-access-t98p9") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "kube-api-access-t98p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.313734 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.318543 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.321314 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config" (OuterVolumeSpecName: "config") pod "4ce06b70-44c3-4df9-9195-ed30d283fca5" (UID: "4ce06b70-44c3-4df9-9195-ed30d283fca5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374643 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t98p9\" (UniqueName: \"kubernetes.io/projected/4ce06b70-44c3-4df9-9195-ed30d283fca5-kube-api-access-t98p9\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374690 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374706 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374716 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374727 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.374736 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce06b70-44c3-4df9-9195-ed30d283fca5-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.804525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" event={"ID":"f85e7a86-219c-4d1e-922c-8d8f4fec787d","Type":"ContainerStarted","Data":"673c0d1a1939dd3a51a3e3260bc825854e1dead5e3529b7c68a38dcc2c093c3d"} Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.807188 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" event={"ID":"4ce06b70-44c3-4df9-9195-ed30d283fca5","Type":"ContainerDied","Data":"2f901ec1b94ed483f299300d0ba920229043d2b2fd25060721734c1efc20ff98"} Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.807245 4698 scope.go:117] "RemoveContainer" containerID="84b7c83590f386d3cc3c5144f6eeaac99edbd1ecc5446c238dedc272976d05a9" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.807383 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-4v2z8" Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.823054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerStarted","Data":"c3d60f0d99eb6beb08286ad000aa9df27b4c0efacd95d75c92843b40447bc941"} Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.829672 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c494d646f-2f4j5" event={"ID":"2743d36b-fb0f-4891-9675-c5f277104553","Type":"ContainerStarted","Data":"13086a187796515bb46d6471aa847d2d7e0bdd6519c20d43fc8b7e8277801434"} Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.837768 4698 generic.go:334] "Generic (PLEG): container finished" podID="a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" containerID="001abc23485f32f08cce21908ea37a8ab5c4b8a32a1885c412b12faf1b97a0a6" exitCode=0 Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.837926 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p4n5" event={"ID":"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916","Type":"ContainerDied","Data":"001abc23485f32f08cce21908ea37a8ab5c4b8a32a1885c412b12faf1b97a0a6"} Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.893542 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:03:04 crc kubenswrapper[4698]: I1006 12:03:04.903874 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-4v2z8"] Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.361270 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce06b70-44c3-4df9-9195-ed30d283fca5" path="/var/lib/kubelet/pods/4ce06b70-44c3-4df9-9195-ed30d283fca5/volumes" Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.851073 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1","Type":"ContainerStarted","Data":"9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b"} Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.859099 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0c9f1455-9d47-4d18-bcfe-5deb642ded6c","Type":"ContainerStarted","Data":"6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6"} Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.866554 4698 generic.go:334] "Generic (PLEG): container finished" podID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerID="a440125a4ee36863e59a145fd83617499560f4c6674d2bac782b892ea31d323e" exitCode=0 Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.866651 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" event={"ID":"f85e7a86-219c-4d1e-922c-8d8f4fec787d","Type":"ContainerDied","Data":"a440125a4ee36863e59a145fd83617499560f4c6674d2bac782b892ea31d323e"} Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.873551 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.748191091 podStartE2EDuration="7.873523085s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="2025-10-06 12:02:59.928512571 +0000 UTC m=+1067.341204744" lastFinishedPulling="2025-10-06 12:03:05.053844565 +0000 UTC m=+1072.466536738" observedRunningTime="2025-10-06 12:03:05.870264515 +0000 UTC m=+1073.282956688" watchObservedRunningTime="2025-10-06 12:03:05.873523085 +0000 UTC m=+1073.286215258" Oct 06 12:03:05 crc kubenswrapper[4698]: I1006 12:03:05.909718 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.890155043 podStartE2EDuration="7.909698613s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="2025-10-06 12:03:00.067430466 
+0000 UTC m=+1067.480122639" lastFinishedPulling="2025-10-06 12:03:05.086974026 +0000 UTC m=+1072.499666209" observedRunningTime="2025-10-06 12:03:05.8962782 +0000 UTC m=+1073.308970373" watchObservedRunningTime="2025-10-06 12:03:05.909698613 +0000 UTC m=+1073.322390786" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.008833 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.332082 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.427031 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.427798 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.427840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhjqc\" (UniqueName: \"kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.427885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" 
(UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.428061 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.428087 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts\") pod \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\" (UID: \"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916\") " Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.434382 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.436717 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.437746 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc" (OuterVolumeSpecName: "kube-api-access-mhjqc") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). 
InnerVolumeSpecName "kube-api-access-mhjqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.443369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts" (OuterVolumeSpecName: "scripts") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.458876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.463422 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data" (OuterVolumeSpecName: "config-data") pod "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" (UID: "a6b2f2d9-e2b6-4845-9a36-7ab3609dd916"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530483 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530513 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530528 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530608 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530631 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhjqc\" (UniqueName: \"kubernetes.io/projected/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-kube-api-access-mhjqc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.530639 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.889537 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerStarted","Data":"143e4fa45644a4cddf36b41ced26fa12d1d5d2fe795c43ca08453ddbc310ccaf"} Oct 06 12:03:06 crc 
kubenswrapper[4698]: I1006 12:03:06.901121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerStarted","Data":"97b3ae6aa321e6617a1f87ea802db76fa2378b6335b8202d7cc28144988e633a"} Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.935988 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p4n5" event={"ID":"a6b2f2d9-e2b6-4845-9a36-7ab3609dd916","Type":"ContainerDied","Data":"cd77b6cf17e40b8a919928fa8f244cd17df71c4ce52c33651d4bd738fe9ab489"} Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.936066 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd77b6cf17e40b8a919928fa8f244cd17df71c4ce52c33651d4bd738fe9ab489" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.936175 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p4n5" Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.965249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" event={"ID":"f85e7a86-219c-4d1e-922c-8d8f4fec787d","Type":"ContainerStarted","Data":"876d99631ea0d4c187701f625a6c21323a1e72e51a0133b234876db86f1652f6"} Oct 06 12:03:06 crc kubenswrapper[4698]: I1006 12:03:06.965560 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.000004 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" podStartSLOduration=5.999970474 podStartE2EDuration="5.999970474s" podCreationTimestamp="2025-10-06 12:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:06.986122232 +0000 UTC m=+1074.398814405" 
watchObservedRunningTime="2025-10-06 12:03:06.999970474 +0000 UTC m=+1074.412662657" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.105593 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5p4n5"] Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.129362 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5p4n5"] Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.151058 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t47dq"] Oct 06 12:03:07 crc kubenswrapper[4698]: E1006 12:03:07.151548 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce06b70-44c3-4df9-9195-ed30d283fca5" containerName="init" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.151562 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce06b70-44c3-4df9-9195-ed30d283fca5" containerName="init" Oct 06 12:03:07 crc kubenswrapper[4698]: E1006 12:03:07.151577 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" containerName="keystone-bootstrap" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.151584 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" containerName="keystone-bootstrap" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.151794 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" containerName="keystone-bootstrap" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.151808 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce06b70-44c3-4df9-9195-ed30d283fca5" containerName="init" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.152559 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.156717 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.157282 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.158761 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.158787 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dw6fp" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.163152 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t47dq"] Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354438 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys\") pod \"keystone-bootstrap-t47dq\" (UID: 
\"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcz7w\" (UniqueName: \"kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.354531 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.358830 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b2f2d9-e2b6-4845-9a36-7ab3609dd916" path="/var/lib/kubelet/pods/a6b2f2d9-e2b6-4845-9a36-7ab3609dd916/volumes" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.456908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.457000 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.457035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.457066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcz7w\" (UniqueName: \"kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.457085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.457113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.469776 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.471127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.475256 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.481135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.484824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcz7w\" (UniqueName: \"kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w\") pod \"keystone-bootstrap-t47dq\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.494692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data\") pod \"keystone-bootstrap-t47dq\" (UID: 
\"1d7c054b-7ade-402e-b389-d07ced69c957\") " pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:07 crc kubenswrapper[4698]: I1006 12:03:07.505171 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:08 crc kubenswrapper[4698]: I1006 12:03:08.846790 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Oct 06 12:03:08 crc kubenswrapper[4698]: I1006 12:03:08.846827 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Oct 06 12:03:08 crc kubenswrapper[4698]: I1006 12:03:08.892902 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Oct 06 12:03:08 crc kubenswrapper[4698]: I1006 12:03:08.956261 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.005663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerStarted","Data":"334d37f19ea0cddd9a01cfd2a3c0e7539c41eaf72ecb11a80e3682db7b16503a"} Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.012592 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerStarted","Data":"443da923541f5471f03c66c9be003265f31fbf177e75e92e74ad3c682740175b"} Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.029598 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.029577905 
podStartE2EDuration="8.029577905s" podCreationTimestamp="2025-10-06 12:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:09.025651548 +0000 UTC m=+1076.438343721" watchObservedRunningTime="2025-10-06 12:03:09.029577905 +0000 UTC m=+1076.442270078" Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.052192 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.105130 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.117760 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.180252 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:09 crc kubenswrapper[4698]: I1006 12:03:09.226267 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": read tcp 10.217.0.2:43474->10.217.0.148:9322: read: connection reset by peer" Oct 06 12:03:10 crc kubenswrapper[4698]: I1006 12:03:10.032220 4698 generic.go:334] "Generic (PLEG): container finished" podID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerID="8dd15571cddcdc3050eb069c8fdb6a97d3c74ba6259b038488b9a56d5dd9153c" exitCode=0 Oct 06 12:03:10 crc kubenswrapper[4698]: I1006 12:03:10.032305 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerDied","Data":"8dd15571cddcdc3050eb069c8fdb6a97d3c74ba6259b038488b9a56d5dd9153c"} Oct 06 12:03:10 crc 
kubenswrapper[4698]: I1006 12:03:10.033047 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:10 crc kubenswrapper[4698]: I1006 12:03:10.070134 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:10 crc kubenswrapper[4698]: I1006 12:03:10.134049 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.043381 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" containerID="cri-o://9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" gracePeriod=30 Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.624796 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.625058 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-log" containerID="cri-o://143e4fa45644a4cddf36b41ced26fa12d1d5d2fe795c43ca08453ddbc310ccaf" gracePeriod=30 Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.625212 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-httpd" containerID="cri-o://334d37f19ea0cddd9a01cfd2a3c0e7539c41eaf72ecb11a80e3682db7b16503a" gracePeriod=30 Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.695899 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.728290 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.799522 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:03:11 crc kubenswrapper[4698]: I1006 12:03:11.799818 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" containerID="cri-o://ae27d4cfa04b34f13c1644ac065d511cc78048696707568a218c79f588260ffb" gracePeriod=10 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.040548 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.083105 4698 generic.go:334] "Generic (PLEG): container finished" podID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerID="334d37f19ea0cddd9a01cfd2a3c0e7539c41eaf72ecb11a80e3682db7b16503a" exitCode=0 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.083145 4698 generic.go:334] "Generic (PLEG): container finished" podID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerID="143e4fa45644a4cddf36b41ced26fa12d1d5d2fe795c43ca08453ddbc310ccaf" exitCode=143 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.083220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerDied","Data":"334d37f19ea0cddd9a01cfd2a3c0e7539c41eaf72ecb11a80e3682db7b16503a"} Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.083257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerDied","Data":"143e4fa45644a4cddf36b41ced26fa12d1d5d2fe795c43ca08453ddbc310ccaf"} Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.094231 4698 generic.go:334] "Generic 
(PLEG): container finished" podID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" exitCode=0 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.094306 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1","Type":"ContainerDied","Data":"9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b"} Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.095674 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.097430 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.099769 4698 generic.go:334] "Generic (PLEG): container finished" podID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerID="ae27d4cfa04b34f13c1644ac065d511cc78048696707568a218c79f588260ffb" exitCode=0 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.099953 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerName="watcher-decision-engine" containerID="cri-o://6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" gracePeriod=30 Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.100251 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" event={"ID":"513a0662-fda2-4dd1-b6c7-132f646ffd9d","Type":"ContainerDied","Data":"ae27d4cfa04b34f13c1644ac065d511cc78048696707568a218c79f588260ffb"} Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.101411 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.107195 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.149683 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.235942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236429 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236674 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6lzkl\" (UniqueName: \"kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236789 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.236976 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.246517 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-849d766464-jl8th"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.248361 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.262747 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-849d766464-jl8th"] Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.338942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339036 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lzkl\" (UniqueName: \"kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs\") pod 
\"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339174 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.339211 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.341207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.341790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.342268 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc 
kubenswrapper[4698]: I1006 12:03:12.349895 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.352109 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.354372 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.357678 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lzkl\" (UniqueName: \"kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl\") pod \"horizon-654cf8498d-s5tdp\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.432616 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-combined-ca-bundle\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441534 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-secret-key\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441853 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-scripts\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-tls-certs\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-logs\") pod \"horizon-849d766464-jl8th\" (UID: 
\"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.441998 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgqn\" (UniqueName: \"kubernetes.io/projected/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-kube-api-access-2wgqn\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.442055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-config-data\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544060 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-secret-key\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544554 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-scripts\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-tls-certs\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " 
pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-logs\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544701 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgqn\" (UniqueName: \"kubernetes.io/projected/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-kube-api-access-2wgqn\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-config-data\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.544769 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-combined-ca-bundle\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.546449 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-logs\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.547280 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-scripts\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.548352 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-config-data\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.554614 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-secret-key\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.565661 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgqn\" (UniqueName: \"kubernetes.io/projected/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-kube-api-access-2wgqn\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.568203 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-combined-ca-bundle\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.569562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b4da0ff-f7c0-47d2-b204-69c0da4ab453-horizon-tls-certs\") pod \"horizon-849d766464-jl8th\" (UID: \"2b4da0ff-f7c0-47d2-b204-69c0da4ab453\") " pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:12 crc kubenswrapper[4698]: I1006 12:03:12.584472 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:13 crc kubenswrapper[4698]: E1006 12:03:13.850766 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:13 crc kubenswrapper[4698]: E1006 12:03:13.851852 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:13 crc kubenswrapper[4698]: E1006 12:03:13.852116 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:13 crc kubenswrapper[4698]: E1006 12:03:13.852141 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:13 crc kubenswrapper[4698]: I1006 12:03:13.915258 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": dial tcp 10.217.0.148:9322: connect: connection refused" Oct 06 12:03:14 crc kubenswrapper[4698]: I1006 12:03:14.583677 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Oct 06 12:03:15 crc kubenswrapper[4698]: I1006 12:03:15.140916 4698 generic.go:334] "Generic (PLEG): container finished" podID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerID="6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" exitCode=0 Oct 06 12:03:15 crc kubenswrapper[4698]: I1006 12:03:15.141073 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0c9f1455-9d47-4d18-bcfe-5deb642ded6c","Type":"ContainerDied","Data":"6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6"} Oct 06 12:03:16 crc kubenswrapper[4698]: E1006 12:03:16.765766 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Oct 06 12:03:16 crc kubenswrapper[4698]: E1006 12:03:16.766625 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch67h5d4h4h655h5c7h76h697hcdh576h64bhcbh5fbh689h5f6h5bdh545h5b9h644h56hd4h699h5d8hd6h565h89h89h6dhfdh5b6hd5h66bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fkgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a7fb1575-bbc3-4d9f-a0ce-31652f935cac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:18 crc kubenswrapper[4698]: E1006 12:03:18.846283 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:18 crc kubenswrapper[4698]: E1006 12:03:18.847569 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:18 crc kubenswrapper[4698]: E1006 12:03:18.848298 4698 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:18 crc kubenswrapper[4698]: E1006 12:03:18.848398 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:19 crc kubenswrapper[4698]: I1006 12:03:19.583684 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.600897 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.601772 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dfh5ffh55ch5c6h58h67hb4h98hb8h5dch74h5dfh66h5b7hb4h674h6chbh676h79h577h679h5c5h55fh5fchffh88h5c9h597h668h64chb7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcgfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7f748b7d89-lvwdc_openstack(5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 
12:03:22.605629 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7f748b7d89-lvwdc" podUID="5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.612959 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.613200 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688hc4h6fhd4h8hbch64dh58ch696h55dhcch697h99h64fh9ch59dh5f9h5cfhd6h555h677h676h7h55ch57hb9h555h59bhc8hdfh85h5bcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sqlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6c494d646f-2f4j5_openstack(2743d36b-fb0f-4891-9675-c5f277104553): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 
12:03:22.615635 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6c494d646f-2f4j5" podUID="2743d36b-fb0f-4891-9675-c5f277104553" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.640700 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.640888 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n87h5f6hb6hbch5b5h5c6h58fhch5chd4h574h7dh687h677h676hb5h58hf4h8h5cbh59dh559h5b6h68h74hdch6dh58h9fh59bh59bh556q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xj7gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5974c58885-pt76k_openstack(44606cce-177c-4ec7-a58b-ce3f7c2ce8dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:22 crc kubenswrapper[4698]: E1006 12:03:22.644365 
4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5974c58885-pt76k" podUID="44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.764359 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.910680 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs\") pod \"0b3304f0-cef5-447b-9b57-be2830595b5b\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.911286 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle\") pod \"0b3304f0-cef5-447b-9b57-be2830595b5b\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.911415 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxsr9\" (UniqueName: \"kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9\") pod \"0b3304f0-cef5-447b-9b57-be2830595b5b\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.911458 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data\") pod \"0b3304f0-cef5-447b-9b57-be2830595b5b\" (UID: 
\"0b3304f0-cef5-447b-9b57-be2830595b5b\") " Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.911589 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca\") pod \"0b3304f0-cef5-447b-9b57-be2830595b5b\" (UID: \"0b3304f0-cef5-447b-9b57-be2830595b5b\") " Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.918376 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs" (OuterVolumeSpecName: "logs") pod "0b3304f0-cef5-447b-9b57-be2830595b5b" (UID: "0b3304f0-cef5-447b-9b57-be2830595b5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.934691 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9" (OuterVolumeSpecName: "kube-api-access-hxsr9") pod "0b3304f0-cef5-447b-9b57-be2830595b5b" (UID: "0b3304f0-cef5-447b-9b57-be2830595b5b"). InnerVolumeSpecName "kube-api-access-hxsr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.957911 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b3304f0-cef5-447b-9b57-be2830595b5b" (UID: "0b3304f0-cef5-447b-9b57-be2830595b5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.973044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0b3304f0-cef5-447b-9b57-be2830595b5b" (UID: "0b3304f0-cef5-447b-9b57-be2830595b5b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:22 crc kubenswrapper[4698]: I1006 12:03:22.977003 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data" (OuterVolumeSpecName: "config-data") pod "0b3304f0-cef5-447b-9b57-be2830595b5b" (UID: "0b3304f0-cef5-447b-9b57-be2830595b5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.015897 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.016066 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxsr9\" (UniqueName: \"kubernetes.io/projected/0b3304f0-cef5-447b-9b57-be2830595b5b-kube-api-access-hxsr9\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.016086 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.016099 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b3304f0-cef5-447b-9b57-be2830595b5b-custom-prometheus-ca\") on node 
\"crc\" DevicePath \"\"" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.016111 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3304f0-cef5-447b-9b57-be2830595b5b-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.256293 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0b3304f0-cef5-447b-9b57-be2830595b5b","Type":"ContainerDied","Data":"3a73116adca764d88e4b3e23214bc0e86a1aada64051c50753d041f114925982"} Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.256443 4698 scope.go:117] "RemoveContainer" containerID="8dd15571cddcdc3050eb069c8fdb6a97d3c74ba6259b038488b9a56d5dd9153c" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.256811 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.394036 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.404144 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.465258 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.467163 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api-log" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.467779 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api-log" Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.467916 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" Oct 06 12:03:23 crc 
kubenswrapper[4698]: I1006 12:03:23.467968 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.469465 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.469577 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api-log" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.472071 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.477449 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.483325 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.637134 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bb2r\" (UniqueName: \"kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.637199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.637292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.637337 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.637379 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.739083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.739161 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.739201 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bb2r\" (UniqueName: \"kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r\") pod \"watcher-api-0\" (UID: 
\"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.739226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.739301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.740869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.746284 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.746296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.753451 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.760577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bb2r\" (UniqueName: \"kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r\") pod \"watcher-api-0\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.793579 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.847156 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.847590 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.848073 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" 
containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:23 crc kubenswrapper[4698]: E1006 12:03:23.848170 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:23 crc kubenswrapper[4698]: I1006 12:03:23.915584 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.148:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:03:25 crc kubenswrapper[4698]: I1006 12:03:25.234809 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:03:25 crc kubenswrapper[4698]: I1006 12:03:25.234889 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:03:25 crc kubenswrapper[4698]: I1006 12:03:25.344817 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3304f0-cef5-447b-9b57-be2830595b5b" path="/var/lib/kubelet/pods/0b3304f0-cef5-447b-9b57-be2830595b5b/volumes" Oct 06 12:03:28 crc kubenswrapper[4698]: E1006 12:03:28.846936 
4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:28 crc kubenswrapper[4698]: E1006 12:03:28.848807 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:28 crc kubenswrapper[4698]: E1006 12:03:28.849503 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Oct 06 12:03:28 crc kubenswrapper[4698]: E1006 12:03:28.849546 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:29 crc kubenswrapper[4698]: E1006 12:03:29.116577 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6 is running failed: container process not found" containerID="6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Oct 06 12:03:29 crc kubenswrapper[4698]: E1006 12:03:29.131153 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6 is running failed: container process not found" containerID="6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Oct 06 12:03:29 crc kubenswrapper[4698]: E1006 12:03:29.131760 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6 is running failed: container process not found" containerID="6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Oct 06 12:03:29 crc kubenswrapper[4698]: E1006 12:03:29.131830 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerName="watcher-decision-engine" Oct 06 12:03:29 crc kubenswrapper[4698]: I1006 12:03:29.583534 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Oct 06 12:03:29 crc 
kubenswrapper[4698]: I1006 12:03:29.583960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.360488 4698 generic.go:334] "Generic (PLEG): container finished" podID="2c03557f-4b1f-4104-87a7-4a5880180c86" containerID="07b05c7b787ed15a0f44335b8a40c29369540e885d5f29b7d908f36778e6a8d2" exitCode=0 Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.360556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5xxbt" event={"ID":"2c03557f-4b1f-4104-87a7-4a5880180c86","Type":"ContainerDied","Data":"07b05c7b787ed15a0f44335b8a40c29369540e885d5f29b7d908f36778e6a8d2"} Oct 06 12:03:31 crc kubenswrapper[4698]: E1006 12:03:31.437782 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Oct 06 12:03:31 crc kubenswrapper[4698]: E1006 12:03:31.438274 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnglf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nc7nk_openstack(df1bd773-04e5-4524-a48e-b7a65c983a89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:31 crc kubenswrapper[4698]: E1006 12:03:31.439509 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nc7nk" 
podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.574183 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.582144 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.589077 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.607918 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.638346 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.658391 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.666790 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.724961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca\") pod \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725057 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0\") pod \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts\") pod \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb\") pod \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725245 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtd2p\" (UniqueName: \"kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p\") pod \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725271 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data\") pod \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725325 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data\") pod \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725361 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725391 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sqlx\" (UniqueName: \"kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx\") pod \"2743d36b-fb0f-4891-9675-c5f277104553\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725465 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs\") pod \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725498 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjxlk\" (UniqueName: \"kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk\") pod \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: 
I1006 12:03:31.725527 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj7gz\" (UniqueName: \"kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz\") pod \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key\") pod \"2743d36b-fb0f-4891-9675-c5f277104553\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725610 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data\") pod \"2743d36b-fb0f-4891-9675-c5f277104553\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725643 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data\") pod \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725670 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs\") pod \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725702 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb\") pod 
\"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725749 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs\") pod \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key\") pod \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725816 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key\") pod \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725847 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs\") pod \"2743d36b-fb0f-4891-9675-c5f277104553\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725887 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.725938 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts\") pod \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\" (UID: \"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726113 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcgfz\" (UniqueName: \"kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz\") pod \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726200 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlkrd\" (UniqueName: \"kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726235 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config\") pod \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726299 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87qjw\" (UniqueName: \"kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw\") pod \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " Oct 06 12:03:31 crc 
kubenswrapper[4698]: I1006 12:03:31.726337 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts\") pod \"2743d36b-fb0f-4891-9675-c5f277104553\" (UID: \"2743d36b-fb0f-4891-9675-c5f277104553\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726374 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data\") pod \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\" (UID: \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726451 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726489 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle\") pod \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\" (UID: \"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle\") pod \"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\" (UID: 
\"0c9f1455-9d47-4d18-bcfe-5deb642ded6c\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726557 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data\") pod \"399e9af3-6208-4d49-aa4b-afeaf842ba08\" (UID: \"399e9af3-6208-4d49-aa4b-afeaf842ba08\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726599 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs\") pod \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\" (UID: \"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.726630 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc\") pod \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\" (UID: \"513a0662-fda2-4dd1-b6c7-132f646ffd9d\") " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.727756 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts" (OuterVolumeSpecName: "scripts") pod "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" (UID: "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.731101 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs" (OuterVolumeSpecName: "logs") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.733561 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts" (OuterVolumeSpecName: "scripts") pod "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" (UID: "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.736920 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw" (OuterVolumeSpecName: "kube-api-access-87qjw") pod "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" (UID: "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1"). InnerVolumeSpecName "kube-api-access-87qjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.737192 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz" (OuterVolumeSpecName: "kube-api-access-xj7gz") pod "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" (UID: "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd"). InnerVolumeSpecName "kube-api-access-xj7gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.737704 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p" (OuterVolumeSpecName: "kube-api-access-gtd2p") pod "0c9f1455-9d47-4d18-bcfe-5deb642ded6c" (UID: "0c9f1455-9d47-4d18-bcfe-5deb642ded6c"). InnerVolumeSpecName "kube-api-access-gtd2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.739964 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs" (OuterVolumeSpecName: "logs") pod "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" (UID: "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.741368 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data" (OuterVolumeSpecName: "config-data") pod "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" (UID: "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.742355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data" (OuterVolumeSpecName: "config-data") pod "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" (UID: "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.742550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2743d36b-fb0f-4891-9675-c5f277104553" (UID: "2743d36b-fb0f-4891-9675-c5f277104553"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.744524 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs" (OuterVolumeSpecName: "logs") pod "2743d36b-fb0f-4891-9675-c5f277104553" (UID: "2743d36b-fb0f-4891-9675-c5f277104553"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.745183 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs" (OuterVolumeSpecName: "logs") pod "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" (UID: "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.748288 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data" (OuterVolumeSpecName: "config-data") pod "2743d36b-fb0f-4891-9675-c5f277104553" (UID: "2743d36b-fb0f-4891-9675-c5f277104553"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.748497 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs" (OuterVolumeSpecName: "logs") pod "0c9f1455-9d47-4d18-bcfe-5deb642ded6c" (UID: "0c9f1455-9d47-4d18-bcfe-5deb642ded6c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.748612 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts" (OuterVolumeSpecName: "scripts") pod "2743d36b-fb0f-4891-9675-c5f277104553" (UID: "2743d36b-fb0f-4891-9675-c5f277104553"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.748790 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.749283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" (UID: "44606cce-177c-4ec7-a58b-ce3f7c2ce8dd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.749887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs" (OuterVolumeSpecName: "logs") pod "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" (UID: "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.755632 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd" (OuterVolumeSpecName: "kube-api-access-zlkrd") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "kube-api-access-zlkrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.756463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts" (OuterVolumeSpecName: "scripts") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.756617 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx" (OuterVolumeSpecName: "kube-api-access-5sqlx") pod "2743d36b-fb0f-4891-9675-c5f277104553" (UID: "2743d36b-fb0f-4891-9675-c5f277104553"). InnerVolumeSpecName "kube-api-access-5sqlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.766300 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" (UID: "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.766347 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz" (OuterVolumeSpecName: "kube-api-access-dcgfz") pod "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" (UID: "5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc"). InnerVolumeSpecName "kube-api-access-dcgfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.797248 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830648 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcgfz\" (UniqueName: \"kubernetes.io/projected/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-kube-api-access-dcgfz\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830691 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlkrd\" (UniqueName: \"kubernetes.io/projected/399e9af3-6208-4d49-aa4b-afeaf842ba08-kube-api-access-zlkrd\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830706 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87qjw\" (UniqueName: \"kubernetes.io/projected/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-kube-api-access-87qjw\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830718 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-scripts\") on 
node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830730 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830771 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830783 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830797 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830810 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtd2p\" (UniqueName: \"kubernetes.io/projected/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-kube-api-access-gtd2p\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830821 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830831 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/399e9af3-6208-4d49-aa4b-afeaf842ba08-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830842 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sqlx\" (UniqueName: 
\"kubernetes.io/projected/2743d36b-fb0f-4891-9675-c5f277104553-kube-api-access-5sqlx\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830852 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830865 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj7gz\" (UniqueName: \"kubernetes.io/projected/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-kube-api-access-xj7gz\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830875 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2743d36b-fb0f-4891-9675-c5f277104553-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830886 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2743d36b-fb0f-4891-9675-c5f277104553-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830897 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830907 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830922 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 
12:03:31.830932 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830943 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830957 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2743d36b-fb0f-4891-9675-c5f277104553-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830968 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.830981 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.839572 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk" (OuterVolumeSpecName: "kube-api-access-kjxlk") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). InnerVolumeSpecName "kube-api-access-kjxlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.861957 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0c9f1455-9d47-4d18-bcfe-5deb642ded6c" (UID: "0c9f1455-9d47-4d18-bcfe-5deb642ded6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.872615 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.875515 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.906106 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938152 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938188 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938200 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938210 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjxlk\" (UniqueName: \"kubernetes.io/projected/513a0662-fda2-4dd1-b6c7-132f646ffd9d-kube-api-access-kjxlk\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938219 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.938407 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.941994 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" (UID: "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.952501 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c9f1455-9d47-4d18-bcfe-5deb642ded6c" (UID: "0c9f1455-9d47-4d18-bcfe-5deb642ded6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.960491 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data" (OuterVolumeSpecName: "config-data") pod "399e9af3-6208-4d49-aa4b-afeaf842ba08" (UID: "399e9af3-6208-4d49-aa4b-afeaf842ba08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.965974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.966236 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data" (OuterVolumeSpecName: "config-data") pod "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" (UID: "0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.967602 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.976365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config" (OuterVolumeSpecName: "config") pod "513a0662-fda2-4dd1-b6c7-132f646ffd9d" (UID: "513a0662-fda2-4dd1-b6c7-132f646ffd9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:31 crc kubenswrapper[4698]: I1006 12:03:31.978868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data" (OuterVolumeSpecName: "config-data") pod "0c9f1455-9d47-4d18-bcfe-5deb642ded6c" (UID: "0c9f1455-9d47-4d18-bcfe-5deb642ded6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040540 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040768 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040778 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040788 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040801 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f1455-9d47-4d18-bcfe-5deb642ded6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040811 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399e9af3-6208-4d49-aa4b-afeaf842ba08-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040819 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040829 4698 reconciler_common.go:293] "Volume detached for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.040841 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513a0662-fda2-4dd1-b6c7-132f646ffd9d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.371937 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1","Type":"ContainerDied","Data":"944cbb078af45fe10c94dd71df6e21e9a25c5cbdbc220f248bb5814182654b70"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.371965 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.373644 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f748b7d89-lvwdc" event={"ID":"5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc","Type":"ContainerDied","Data":"16c452e72ccdcc18a33f9b0ae875799f7ce211010a816760e64fed21e6e714af"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.373692 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f748b7d89-lvwdc" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.376990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" event={"ID":"513a0662-fda2-4dd1-b6c7-132f646ffd9d","Type":"ContainerDied","Data":"02bef7f3068cc468bf6c5238cba1fd2e96555f246beb3268fd2041a1a9a38ae7"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.377197 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.385663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"399e9af3-6208-4d49-aa4b-afeaf842ba08","Type":"ContainerDied","Data":"c3d60f0d99eb6beb08286ad000aa9df27b4c0efacd95d75c92843b40447bc941"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.385693 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.387655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5974c58885-pt76k" event={"ID":"44606cce-177c-4ec7-a58b-ce3f7c2ce8dd","Type":"ContainerDied","Data":"a01acd03fd3d93954efc3515be69da8611909f25e9c95ce5f86569bbf351834d"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.387746 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5974c58885-pt76k" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.402896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0c9f1455-9d47-4d18-bcfe-5deb642ded6c","Type":"ContainerDied","Data":"dc71cc476528c2681832ec4f0be7c2fbbfb26aaca8e6a8e3076247e383a824b6"} Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.402964 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.407864 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c494d646f-2f4j5" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.408154 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c494d646f-2f4j5" event={"ID":"2743d36b-fb0f-4891-9675-c5f277104553","Type":"ContainerDied","Data":"13086a187796515bb46d6471aa847d2d7e0bdd6519c20d43fc8b7e8277801434"} Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.415644 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-nc7nk" podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.462401 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.472259 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-52h5d"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.488404 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.497385 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.508644 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.509180 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-log" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509200 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-log" Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 
12:03:32.509218 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerName="watcher-decision-engine" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509227 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerName="watcher-decision-engine" Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.509253 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509259 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.509275 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-httpd" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509282 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-httpd" Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.509290 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509297 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" Oct 06 12:03:32 crc kubenswrapper[4698]: E1006 12:03:32.509315 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="init" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509321 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="init" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509511 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509523 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-log" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509531 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" containerName="glance-httpd" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509548 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" containerName="watcher-decision-engine" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.509558 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" containerName="watcher-applier" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.510403 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.518570 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.521777 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.669546 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.682166 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-config-data\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.695951 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac2f2ee-e5e6-4fb9-a527-47976859efe7-logs\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.696043 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.696176 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npctn\" (UniqueName: \"kubernetes.io/projected/eac2f2ee-e5e6-4fb9-a527-47976859efe7-kube-api-access-npctn\") pod \"watcher-applier-0\" (UID: 
\"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.713247 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5974c58885-pt76k"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.744478 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.765890 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.798406 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-config-data\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.798460 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.798481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac2f2ee-e5e6-4fb9-a527-47976859efe7-logs\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.798512 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npctn\" (UniqueName: \"kubernetes.io/projected/eac2f2ee-e5e6-4fb9-a527-47976859efe7-kube-api-access-npctn\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " 
pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.802435 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eac2f2ee-e5e6-4fb9-a527-47976859efe7-logs\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.813114 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.814935 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.824380 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.824788 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.828293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npctn\" (UniqueName: \"kubernetes.io/projected/eac2f2ee-e5e6-4fb9-a527-47976859efe7-kube-api-access-npctn\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.831170 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f748b7d89-lvwdc"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.832553 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 
12:03:32.838486 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.844407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac2f2ee-e5e6-4fb9-a527-47976859efe7-config-data\") pod \"watcher-applier-0\" (UID: \"eac2f2ee-e5e6-4fb9-a527-47976859efe7\") " pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.861147 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.886249 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.893320 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c494d646f-2f4j5"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.900681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.900749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.900848 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.900906 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mct\" (UniqueName: \"kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.901068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.924248 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.938907 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.951329 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.955778 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.960484 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.960713 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 12:03:32 crc kubenswrapper[4698]: I1006 12:03:32.988984 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.013030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.013123 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.013328 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.013379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67mct\" (UniqueName: \"kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct\") pod 
\"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.013503 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.046962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.057203 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.057648 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.057733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mct\" (UniqueName: \"kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " 
pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.059100 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115452 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115532 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " 
pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115656 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mjg\" (UniqueName: \"kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.115936 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.116161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218065 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " 
pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218155 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218200 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mjg\" (UniqueName: \"kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218299 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.218877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 
12:03:33.218892 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.219387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.219499 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.219521 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.220120 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.223815 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.224674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.225525 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.225905 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.230937 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.237734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mjg\" (UniqueName: \"kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.248757 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.291468 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.347174 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1" path="/var/lib/kubelet/pods/0926f844-0dfe-4322-ad5e-a6bc0d7dc2a1/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.347801 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9f1455-9d47-4d18-bcfe-5deb642ded6c" path="/var/lib/kubelet/pods/0c9f1455-9d47-4d18-bcfe-5deb642ded6c/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.348444 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2743d36b-fb0f-4891-9675-c5f277104553" path="/var/lib/kubelet/pods/2743d36b-fb0f-4891-9675-c5f277104553/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.350721 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399e9af3-6208-4d49-aa4b-afeaf842ba08" 
path="/var/lib/kubelet/pods/399e9af3-6208-4d49-aa4b-afeaf842ba08/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.351664 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44606cce-177c-4ec7-a58b-ce3f7c2ce8dd" path="/var/lib/kubelet/pods/44606cce-177c-4ec7-a58b-ce3f7c2ce8dd/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.352093 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" path="/var/lib/kubelet/pods/513a0662-fda2-4dd1-b6c7-132f646ffd9d/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.353542 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc" path="/var/lib/kubelet/pods/5ee8a0f4-f1b8-4642-bbeb-936c7c491ebc/volumes" Oct 06 12:03:33 crc kubenswrapper[4698]: E1006 12:03:33.744504 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 06 12:03:33 crc kubenswrapper[4698]: E1006 12:03:33.745304 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26kzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-l9vdh_openstack(d620584e-f9cd-432a-9f55-9aa1f1056766): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 12:03:33 crc kubenswrapper[4698]: E1006 12:03:33.746641 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-l9vdh" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" Oct 06 12:03:33 crc kubenswrapper[4698]: I1006 12:03:33.971667 4698 scope.go:117] "RemoveContainer" containerID="917d92bb97c301465120fb5118139389d174cf6e72d9b51d68e5b7115b526bf5" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.120956 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.201884 4698 scope.go:117] "RemoveContainer" containerID="9c6cbc7f952b74405d5df3b2d7bc7f378b901e52f4c50d9ffbce183448c8236b" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.245635 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle\") pod \"2c03557f-4b1f-4104-87a7-4a5880180c86\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.246124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gt4s\" (UniqueName: \"kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s\") pod \"2c03557f-4b1f-4104-87a7-4a5880180c86\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.246231 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config\") pod \"2c03557f-4b1f-4104-87a7-4a5880180c86\" (UID: \"2c03557f-4b1f-4104-87a7-4a5880180c86\") " Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.253824 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s" (OuterVolumeSpecName: "kube-api-access-6gt4s") pod "2c03557f-4b1f-4104-87a7-4a5880180c86" (UID: "2c03557f-4b1f-4104-87a7-4a5880180c86"). InnerVolumeSpecName "kube-api-access-6gt4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.258922 4698 scope.go:117] "RemoveContainer" containerID="ae27d4cfa04b34f13c1644ac065d511cc78048696707568a218c79f588260ffb" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.287060 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config" (OuterVolumeSpecName: "config") pod "2c03557f-4b1f-4104-87a7-4a5880180c86" (UID: "2c03557f-4b1f-4104-87a7-4a5880180c86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.290811 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c03557f-4b1f-4104-87a7-4a5880180c86" (UID: "2c03557f-4b1f-4104-87a7-4a5880180c86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.349781 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.349826 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03557f-4b1f-4104-87a7-4a5880180c86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.349843 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gt4s\" (UniqueName: \"kubernetes.io/projected/2c03557f-4b1f-4104-87a7-4a5880180c86-kube-api-access-6gt4s\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.350913 4698 scope.go:117] "RemoveContainer" containerID="c77c3b8201f9c50736cf0d52edd120629f090882795d77f2e1c2300975940185" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.497317 4698 scope.go:117] "RemoveContainer" containerID="334d37f19ea0cddd9a01cfd2a3c0e7539c41eaf72ecb11a80e3682db7b16503a" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.499660 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t47dq"] Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.537770 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5xxbt" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.537768 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5xxbt" event={"ID":"2c03557f-4b1f-4104-87a7-4a5880180c86","Type":"ContainerDied","Data":"8a02fb7f326627fb038fce58f00640c1120dca73b5016e614005fa67856ba365"} Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.537831 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a02fb7f326627fb038fce58f00640c1120dca73b5016e614005fa67856ba365" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.584138 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-52h5d" podUID="513a0662-fda2-4dd1-b6c7-132f646ffd9d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Oct 06 12:03:34 crc kubenswrapper[4698]: E1006 12:03:34.585714 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-l9vdh" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.618883 4698 scope.go:117] "RemoveContainer" containerID="143e4fa45644a4cddf36b41ced26fa12d1d5d2fe795c43ca08453ddbc310ccaf" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.641902 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"] Oct 06 12:03:34 crc kubenswrapper[4698]: W1006 12:03:34.646194 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ae0d1c_2545_4122_b2d9_3380fd017840.slice/crio-5ac0d95b1e8ccf90bffb375c338d1b3280984180b4be219e5946e1113869d440 WatchSource:0}: Error finding container 
5ac0d95b1e8ccf90bffb375c338d1b3280984180b4be219e5946e1113869d440: Status 404 returned error can't find the container with id 5ac0d95b1e8ccf90bffb375c338d1b3280984180b4be219e5946e1113869d440 Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.660316 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-849d766464-jl8th"] Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.706601 4698 scope.go:117] "RemoveContainer" containerID="6b3d54298caabdfa0dbc3804667acead78fc9d07ae20a148909ac36410cf1cb6" Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.762158 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.855871 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Oct 06 12:03:34 crc kubenswrapper[4698]: I1006 12:03:34.989838 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:03:35 crc kubenswrapper[4698]: W1006 12:03:35.018709 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb760929b_e89c_4e54_8506_e6a61a100d84.slice/crio-dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb WatchSource:0}: Error finding container dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb: Status 404 returned error can't find the container with id dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.076550 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.432781 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:03:35 crc kubenswrapper[4698]: E1006 12:03:35.433742 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c03557f-4b1f-4104-87a7-4a5880180c86" containerName="neutron-db-sync" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.433756 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c03557f-4b1f-4104-87a7-4a5880180c86" containerName="neutron-db-sync" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.433948 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c03557f-4b1f-4104-87a7-4a5880180c86" containerName="neutron-db-sync" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.434994 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.447118 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.529211 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"] Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.533879 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.538928 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.539207 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.539333 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.539445 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6nchg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.545879 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"] Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.600678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7pkpz" event={"ID":"828a84bc-95cc-448b-a84c-4ca894dd754b","Type":"ContainerStarted","Data":"ab3c038cfeb76cf8ab1c4eaf406ac4c2175f38a1e89181c54bfd90c4801d7302"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605661 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9j6\" (UniqueName: \"kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " 
pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605727 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605759 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.605886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.609790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"eac2f2ee-e5e6-4fb9-a527-47976859efe7","Type":"ContainerStarted","Data":"436822c924f6e14e636476ca0157bdedcea3d24f9a2b6fefd83665f61f086112"} Oct 06 12:03:35 crc 
kubenswrapper[4698]: I1006 12:03:35.610078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"eac2f2ee-e5e6-4fb9-a527-47976859efe7","Type":"ContainerStarted","Data":"e1db0f2ca78d1d565592ef6958fe6f288dcdb600856ee4fe9429ac379e972a98"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.620494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b760929b-e89c-4e54-8506-e6a61a100d84","Type":"ContainerStarted","Data":"cf434a5fd4b064e31ddd2d029b4719de2f7d3a0251f98dec49f01db760bfd68c"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.620839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b760929b-e89c-4e54-8506-e6a61a100d84","Type":"ContainerStarted","Data":"dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.626407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerStarted","Data":"25f4f23b2a7862ec85d21359e3e90c8a7e93137da0da4f66a7449cf4ec772ce9"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.678439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerStarted","Data":"8c1e954d910aac1280114fe56b674d776984fd67a4f198347c001010d95a1dfa"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.685148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerStarted","Data":"5ac0d95b1e8ccf90bffb375c338d1b3280984180b4be219e5946e1113869d440"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.699137 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-applier-0" podStartSLOduration=3.699125402 podStartE2EDuration="3.699125402s" podCreationTimestamp="2025-10-06 12:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:35.697967553 +0000 UTC m=+1103.110659726" watchObservedRunningTime="2025-10-06 12:03:35.699125402 +0000 UTC m=+1103.111817575" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.699997 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerStarted","Data":"bf6f949620b3e856ff8c4a6d26c92594a9054ffdb47a251d25b10dfe10dd46b4"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.700308 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-log" containerID="cri-o://443da923541f5471f03c66c9be003265f31fbf177e75e92e74ad3c682740175b" gracePeriod=30 Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.700709 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-httpd" containerID="cri-o://bf6f949620b3e856ff8c4a6d26c92594a9054ffdb47a251d25b10dfe10dd46b4" gracePeriod=30 Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.701675 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7pkpz" podStartSLOduration=7.07986998 podStartE2EDuration="37.701669504s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="2025-10-06 12:03:00.810076867 +0000 UTC m=+1068.222769040" lastFinishedPulling="2025-10-06 12:03:31.431876391 +0000 UTC m=+1098.844568564" observedRunningTime="2025-10-06 12:03:35.669094097 +0000 UTC m=+1103.081786270" 
watchObservedRunningTime="2025-10-06 12:03:35.701669504 +0000 UTC m=+1103.114361677" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6btz\" (UniqueName: \"kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708775 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708820 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9j6\" (UniqueName: \"kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708899 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708929 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: 
\"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.708963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.709083 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.709140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.709192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.709235 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " 
pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.709321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.712934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.713077 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.714722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.715724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 
12:03:35.716530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.722387 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.722366049 podStartE2EDuration="3.722366049s" podCreationTimestamp="2025-10-06 12:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:35.716146367 +0000 UTC m=+1103.128838540" watchObservedRunningTime="2025-10-06 12:03:35.722366049 +0000 UTC m=+1103.135058212" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.730379 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t47dq" event={"ID":"1d7c054b-7ade-402e-b389-d07ced69c957","Type":"ContainerStarted","Data":"e15ede615d1039c28da6e28604566807ffb0d03e5e8a810496e36f004b855fa7"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.730439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t47dq" event={"ID":"1d7c054b-7ade-402e-b389-d07ced69c957","Type":"ContainerStarted","Data":"2da7a049924c78268fe3122b9ba5538ab4861a9a29515ae45354656acd34ae36"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.756184 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849d766464-jl8th" event={"ID":"2b4da0ff-f7c0-47d2-b204-69c0da4ab453","Type":"ContainerStarted","Data":"f65929c664212b4fc7c800f5749c78ba871299955d571cccb5ace9fde8ff061d"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.786881 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9j6\" 
(UniqueName: \"kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6\") pod \"dnsmasq-dns-84b966f6c9-smmk4\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.798236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerStarted","Data":"b03bec7e62c24f1dc68bc562e78ff33bbf7bf998ca28f24a468d2bfb2c93d36d"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.798298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerStarted","Data":"d7a3f50f3fd2a604dc2498eb746a4943cd7e8dc5de53a1027cdfab593b607d8d"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.798308 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerStarted","Data":"b154b74aadafbd5d7f62e969531ff64ade2ce7f5a417c6fc08c443eeabb520dc"} Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.799484 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.804811 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.804786483 podStartE2EDuration="34.804786483s" podCreationTimestamp="2025-10-06 12:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:35.767465582 +0000 UTC m=+1103.180157755" watchObservedRunningTime="2025-10-06 12:03:35.804786483 +0000 UTC m=+1103.217478656" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.810962 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.811056 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6btz\" (UniqueName: \"kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.811081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.811152 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.811234 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.821484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.837512 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.856627 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t47dq" podStartSLOduration=28.856604919 podStartE2EDuration="28.856604919s" podCreationTimestamp="2025-10-06 12:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:35.796109941 +0000 UTC m=+1103.208802114" watchObservedRunningTime="2025-10-06 12:03:35.856604919 +0000 UTC m=+1103.269297092" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.858599 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6btz\" (UniqueName: \"kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.860027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.868465 4698 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=12.868289606 podStartE2EDuration="12.868289606s" podCreationTimestamp="2025-10-06 12:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:35.821886881 +0000 UTC m=+1103.234579054" watchObservedRunningTime="2025-10-06 12:03:35.868289606 +0000 UTC m=+1103.280981779" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.876812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs\") pod \"neutron-798cdc9cb4-kt9cg\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") " pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.911275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:35 crc kubenswrapper[4698]: I1006 12:03:35.948095 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.720862 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:03:36 crc kubenswrapper[4698]: W1006 12:03:36.875091 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab4c28e0_c76b_4ee0_8bed_cbea4379cdf1.slice/crio-4d0ab84cd9da2c872657c462d78e09e63e8b4b681696459a7404cc64ba537b2c WatchSource:0}: Error finding container 4d0ab84cd9da2c872657c462d78e09e63e8b4b681696459a7404cc64ba537b2c: Status 404 returned error can't find the container with id 4d0ab84cd9da2c872657c462d78e09e63e8b4b681696459a7404cc64ba537b2c Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.880394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849d766464-jl8th" event={"ID":"2b4da0ff-f7c0-47d2-b204-69c0da4ab453","Type":"ContainerStarted","Data":"3e8f31df52c28980ac01fe950818134999cf4a56cf89b1e5628d510ac6e77682"} Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.882680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerStarted","Data":"5fd6a5bd12a195df40342fdf1541abf8c63ee4e04bcc6f0c8b900c346830590c"} Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.896448 4698 generic.go:334] "Generic (PLEG): container finished" podID="e7163b58-d460-4874-ab7d-826c9058165f" containerID="bf6f949620b3e856ff8c4a6d26c92594a9054ffdb47a251d25b10dfe10dd46b4" exitCode=0 Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.896485 4698 generic.go:334] "Generic (PLEG): container finished" podID="e7163b58-d460-4874-ab7d-826c9058165f" containerID="443da923541f5471f03c66c9be003265f31fbf177e75e92e74ad3c682740175b" exitCode=143 Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.896479 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerDied","Data":"bf6f949620b3e856ff8c4a6d26c92594a9054ffdb47a251d25b10dfe10dd46b4"} Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.896541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerDied","Data":"443da923541f5471f03c66c9be003265f31fbf177e75e92e74ad3c682740175b"} Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.901698 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerStarted","Data":"86c5bb37c206b2dd1c0aa83d1b825d8a1c283fd9a75e783d8cc44f302f1cbf94"} Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.955562 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:36 crc kubenswrapper[4698]: I1006 12:03:36.963637 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"] Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.067886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068105 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068182 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068233 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068260 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068337 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b95tj\" (UniqueName: \"kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.068485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run\") pod \"e7163b58-d460-4874-ab7d-826c9058165f\" (UID: \"e7163b58-d460-4874-ab7d-826c9058165f\") " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.076753 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs" (OuterVolumeSpecName: "logs") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.081368 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.085527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj" (OuterVolumeSpecName: "kube-api-access-b95tj") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "kube-api-access-b95tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.085655 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.090405 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts" (OuterVolumeSpecName: "scripts") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.154488 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data" (OuterVolumeSpecName: "config-data") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.158215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7163b58-d460-4874-ab7d-826c9058165f" (UID: "e7163b58-d460-4874-ab7d-826c9058165f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172530 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172578 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172628 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172643 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc 
kubenswrapper[4698]: I1006 12:03:37.172654 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7163b58-d460-4874-ab7d-826c9058165f-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172667 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b95tj\" (UniqueName: \"kubernetes.io/projected/e7163b58-d460-4874-ab7d-826c9058165f-kube-api-access-b95tj\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.172681 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7163b58-d460-4874-ab7d-826c9058165f-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.232940 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.275912 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.887389 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.937607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerStarted","Data":"f584419d25b819f114823ad54571c894825e0ac230c6b2a162ba40bfd4286c8d"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.955227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" 
event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerStarted","Data":"0d805ab4ee4c425219de6611052e93ca87eca2634313a8be38c4af653a5dfec6"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.955283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerStarted","Data":"85553e367074951654f57cb2480328cbb9daebbf19074de634ae3d05627cf049"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.955294 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerStarted","Data":"f25007dbd98f675c276b13647d55d26bdb21c69e555ab82adbbe194323b2ef64"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.962669 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.973653 4698 generic.go:334] "Generic (PLEG): container finished" podID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerID="445034ced48b5fd303fb9dea7f2c9fb3f8fdf5a390a71671459dbaed5c374e8f" exitCode=0 Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.973718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" event={"ID":"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1","Type":"ContainerDied","Data":"445034ced48b5fd303fb9dea7f2c9fb3f8fdf5a390a71671459dbaed5c374e8f"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.973745 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" event={"ID":"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1","Type":"ContainerStarted","Data":"4d0ab84cd9da2c872657c462d78e09e63e8b4b681696459a7404cc64ba537b2c"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.978203 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849d766464-jl8th" 
event={"ID":"2b4da0ff-f7c0-47d2-b204-69c0da4ab453","Type":"ContainerStarted","Data":"c114e3b12fb0eb686ec51f57e57a914adc7bfb3f6d5b7a42c9eccd341dd7d0e0"} Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.978668 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.978581994 podStartE2EDuration="5.978581994s" podCreationTimestamp="2025-10-06 12:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:37.960988753 +0000 UTC m=+1105.373680926" watchObservedRunningTime="2025-10-06 12:03:37.978581994 +0000 UTC m=+1105.391274187" Oct 06 12:03:37 crc kubenswrapper[4698]: I1006 12:03:37.981799 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerStarted","Data":"6ac119b788d458fe7ac1c4ff4e0504c0c1bcf69720b6056366a73205c669289d"} Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.029388 4698 generic.go:334] "Generic (PLEG): container finished" podID="828a84bc-95cc-448b-a84c-4ca894dd754b" containerID="ab3c038cfeb76cf8ab1c4eaf406ac4c2175f38a1e89181c54bfd90c4801d7302" exitCode=0 Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.029509 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7pkpz" event={"ID":"828a84bc-95cc-448b-a84c-4ca894dd754b","Type":"ContainerDied","Data":"ab3c038cfeb76cf8ab1c4eaf406ac4c2175f38a1e89181c54bfd90c4801d7302"} Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.068442 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.069886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-798cdc9cb4-kt9cg" podStartSLOduration=3.069853744 podStartE2EDuration="3.069853744s" podCreationTimestamp="2025-10-06 12:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:38.018226242 +0000 UTC m=+1105.430918425" watchObservedRunningTime="2025-10-06 12:03:38.069853744 +0000 UTC m=+1105.482545917" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.069971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e7163b58-d460-4874-ab7d-826c9058165f","Type":"ContainerDied","Data":"97b3ae6aa321e6617a1f87ea802db76fa2378b6335b8202d7cc28144988e633a"} Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.075408 4698 scope.go:117] "RemoveContainer" containerID="bf6f949620b3e856ff8c4a6d26c92594a9054ffdb47a251d25b10dfe10dd46b4" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.124344 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-849d766464-jl8th" podStartSLOduration=25.501516846 podStartE2EDuration="26.124323415s" podCreationTimestamp="2025-10-06 12:03:12 +0000 UTC" firstStartedPulling="2025-10-06 12:03:34.664450818 +0000 UTC m=+1102.077142991" lastFinishedPulling="2025-10-06 12:03:35.287257397 +0000 UTC m=+1102.699949560" observedRunningTime="2025-10-06 12:03:38.108901808 +0000 UTC m=+1105.521593981" watchObservedRunningTime="2025-10-06 12:03:38.124323415 +0000 UTC m=+1105.537015588" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.156923 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-654cf8498d-s5tdp" podStartSLOduration=25.282566784 podStartE2EDuration="26.156905971s" 
podCreationTimestamp="2025-10-06 12:03:12 +0000 UTC" firstStartedPulling="2025-10-06 12:03:34.665267737 +0000 UTC m=+1102.077959910" lastFinishedPulling="2025-10-06 12:03:35.539606924 +0000 UTC m=+1102.952299097" observedRunningTime="2025-10-06 12:03:38.149804978 +0000 UTC m=+1105.562497151" watchObservedRunningTime="2025-10-06 12:03:38.156905971 +0000 UTC m=+1105.569598134" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.186851 4698 scope.go:117] "RemoveContainer" containerID="443da923541f5471f03c66c9be003265f31fbf177e75e92e74ad3c682740175b" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.206428 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.238135 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.294899 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:38 crc kubenswrapper[4698]: E1006 12:03:38.295699 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-log" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.295743 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-log" Oct 06 12:03:38 crc kubenswrapper[4698]: E1006 12:03:38.295766 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-httpd" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.295772 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-httpd" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.296066 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7163b58-d460-4874-ab7d-826c9058165f" 
containerName="glance-httpd" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.296086 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7163b58-d460-4874-ab7d-826c9058165f" containerName="glance-log" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.299958 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.308772 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.327040 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.327257 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430379 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430450 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430648 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430714 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.430804 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6s5\" 
(UniqueName: \"kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545149 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545197 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545296 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545317 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545412 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6s5\" (UniqueName: \"kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.545998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.555491 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.555586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.568117 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.572567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.573360 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.599484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.629899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6s5\" (UniqueName: \"kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.701999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.795183 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.795368 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:03:38 crc kubenswrapper[4698]: I1006 12:03:38.980685 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.121671 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" event={"ID":"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1","Type":"ContainerStarted","Data":"3bd95566480f3aed7a379dc013d8548d6081def95e72e8e8a931980de82ca39b"} Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.122805 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.146202 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" podStartSLOduration=4.146172756 podStartE2EDuration="4.146172756s" podCreationTimestamp="2025-10-06 12:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:39.144602037 +0000 UTC m=+1106.557294220" watchObservedRunningTime="2025-10-06 12:03:39.146172756 +0000 UTC m=+1106.558864929" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.434473 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7163b58-d460-4874-ab7d-826c9058165f" path="/var/lib/kubelet/pods/e7163b58-d460-4874-ab7d-826c9058165f/volumes" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.790589 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c87999589-tj5hk"] Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.801885 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.805543 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.805641 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.835718 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c87999589-tj5hk"] Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-public-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-combined-ca-bundle\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860716 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-httpd-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860862 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tnp\" (UniqueName: \"kubernetes.io/projected/6d4d2004-223b-4b0e-9b88-229437567c01-kube-api-access-m6tnp\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860898 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-ovndb-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.860916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-internal-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.864308 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7pkpz" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.940282 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.962891 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts\") pod \"828a84bc-95cc-448b-a84c-4ca894dd754b\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.962999 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs\") pod \"828a84bc-95cc-448b-a84c-4ca894dd754b\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.963063 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlh78\" (UniqueName: \"kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78\") pod \"828a84bc-95cc-448b-a84c-4ca894dd754b\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.963226 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle\") pod \"828a84bc-95cc-448b-a84c-4ca894dd754b\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.963360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data\") pod \"828a84bc-95cc-448b-a84c-4ca894dd754b\" (UID: \"828a84bc-95cc-448b-a84c-4ca894dd754b\") " Oct 06 12:03:39 crc 
kubenswrapper[4698]: I1006 12:03:39.964700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-public-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-combined-ca-bundle\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-httpd-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6tnp\" (UniqueName: \"kubernetes.io/projected/6d4d2004-223b-4b0e-9b88-229437567c01-kube-api-access-m6tnp\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964916 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-ovndb-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.964942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-internal-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.965139 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs" (OuterVolumeSpecName: "logs") pod "828a84bc-95cc-448b-a84c-4ca894dd754b" (UID: "828a84bc-95cc-448b-a84c-4ca894dd754b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.974798 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.980142 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-httpd-config\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.984466 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78" 
(OuterVolumeSpecName: "kube-api-access-vlh78") pod "828a84bc-95cc-448b-a84c-4ca894dd754b" (UID: "828a84bc-95cc-448b-a84c-4ca894dd754b"). InnerVolumeSpecName "kube-api-access-vlh78". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:39 crc kubenswrapper[4698]: I1006 12:03:39.990493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6tnp\" (UniqueName: \"kubernetes.io/projected/6d4d2004-223b-4b0e-9b88-229437567c01-kube-api-access-m6tnp\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:39.999067 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-public-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.000425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-internal-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.012851 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-combined-ca-bundle\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.015345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts" (OuterVolumeSpecName: "scripts") pod 
"828a84bc-95cc-448b-a84c-4ca894dd754b" (UID: "828a84bc-95cc-448b-a84c-4ca894dd754b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.024162 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d4d2004-223b-4b0e-9b88-229437567c01-ovndb-tls-certs\") pod \"neutron-c87999589-tj5hk\" (UID: \"6d4d2004-223b-4b0e-9b88-229437567c01\") " pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.028695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data" (OuterVolumeSpecName: "config-data") pod "828a84bc-95cc-448b-a84c-4ca894dd754b" (UID: "828a84bc-95cc-448b-a84c-4ca894dd754b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.037290 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "828a84bc-95cc-448b-a84c-4ca894dd754b" (UID: "828a84bc-95cc-448b-a84c-4ca894dd754b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.067180 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.067212 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.067221 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/828a84bc-95cc-448b-a84c-4ca894dd754b-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.067231 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlh78\" (UniqueName: \"kubernetes.io/projected/828a84bc-95cc-448b-a84c-4ca894dd754b-kube-api-access-vlh78\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.067244 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828a84bc-95cc-448b-a84c-4ca894dd754b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.149669 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7pkpz" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.150147 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7pkpz" event={"ID":"828a84bc-95cc-448b-a84c-4ca894dd754b","Type":"ContainerDied","Data":"445539ad5c9f9c411cb24bdafcb15af21d36b5dc26c6cf2aeb4de2235b997816"} Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.150263 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445539ad5c9f9c411cb24bdafcb15af21d36b5dc26c6cf2aeb4de2235b997816" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.159432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerStarted","Data":"974db308e22dc26a356ba6e3a7824df5009e6d2ca1863889cd109c391af5e9ff"} Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.173797 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.182569 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.325791 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-667d6544d-8ddpx"] Oct 06 12:03:40 crc kubenswrapper[4698]: E1006 12:03:40.326236 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828a84bc-95cc-448b-a84c-4ca894dd754b" containerName="placement-db-sync" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.326254 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="828a84bc-95cc-448b-a84c-4ca894dd754b" containerName="placement-db-sync" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.326443 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="828a84bc-95cc-448b-a84c-4ca894dd754b" containerName="placement-db-sync" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.327716 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.332804 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.332841 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.333043 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.333089 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jxgqh" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.342075 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.345619 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-667d6544d-8ddpx"] Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.384690 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-public-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.384764 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-config-data\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.385610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-scripts\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.385812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306a4319-6233-4455-85ac-b0c422603faf-logs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.387390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-combined-ca-bundle\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " 
pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.387616 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-internal-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.387672 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtg4m\" (UniqueName: \"kubernetes.io/projected/306a4319-6233-4455-85ac-b0c422603faf-kube-api-access-rtg4m\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-scripts\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306a4319-6233-4455-85ac-b0c422603faf-logs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489332 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-combined-ca-bundle\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 
crc kubenswrapper[4698]: I1006 12:03:40.489402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-internal-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489428 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtg4m\" (UniqueName: \"kubernetes.io/projected/306a4319-6233-4455-85ac-b0c422603faf-kube-api-access-rtg4m\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-public-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.489467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-config-data\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.495809 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306a4319-6233-4455-85ac-b0c422603faf-logs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.497196 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-public-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.497552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-config-data\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.498120 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-combined-ca-bundle\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.498733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-scripts\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.502683 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a4319-6233-4455-85ac-b0c422603faf-internal-tls-certs\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.513576 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtg4m\" (UniqueName: 
\"kubernetes.io/projected/306a4319-6233-4455-85ac-b0c422603faf-kube-api-access-rtg4m\") pod \"placement-667d6544d-8ddpx\" (UID: \"306a4319-6233-4455-85ac-b0c422603faf\") " pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:40 crc kubenswrapper[4698]: I1006 12:03:40.650515 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.019035 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-667d6544d-8ddpx"] Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.188330 4698 generic.go:334] "Generic (PLEG): container finished" podID="1d7c054b-7ade-402e-b389-d07ced69c957" containerID="e15ede615d1039c28da6e28604566807ffb0d03e5e8a810496e36f004b855fa7" exitCode=0 Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.188412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t47dq" event={"ID":"1d7c054b-7ade-402e-b389-d07ced69c957","Type":"ContainerDied","Data":"e15ede615d1039c28da6e28604566807ffb0d03e5e8a810496e36f004b855fa7"} Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.190384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerStarted","Data":"6e2ce57338624151c8d74239a24c93aa48c6cb9252334928d07453d6b3f9a102"} Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.266872 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c87999589-tj5hk"] Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.433947 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.434430 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 
12:03:42.589327 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.589401 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-849d766464-jl8th" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.887852 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Oct 06 12:03:42 crc kubenswrapper[4698]: I1006 12:03:42.942138 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.231890 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.266348 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.272212 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.291692 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.291957 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.363203 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.387747 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.795199 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: I1006 12:03:43.802362 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Oct 06 12:03:43 crc kubenswrapper[4698]: W1006 12:03:43.952273 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod306a4319_6233_4455_85ac_b0c422603faf.slice/crio-013ef760686ab27d1b6d48f1e5b258f9635b7c4c3dcbe5ac6c324b9545577295 WatchSource:0}: Error finding container 013ef760686ab27d1b6d48f1e5b258f9635b7c4c3dcbe5ac6c324b9545577295: Status 404 returned error can't find the container with id 013ef760686ab27d1b6d48f1e5b258f9635b7c4c3dcbe5ac6c324b9545577295 Oct 06 12:03:43 crc kubenswrapper[4698]: W1006 12:03:43.954736 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d4d2004_223b_4b0e_9b88_229437567c01.slice/crio-2bae1ffc69ac8273bfa59a97f7a7839f21a038f30d7fe313d272df881d30b329 WatchSource:0}: Error finding container 2bae1ffc69ac8273bfa59a97f7a7839f21a038f30d7fe313d272df881d30b329: Status 404 returned error can't find the container with id 2bae1ffc69ac8273bfa59a97f7a7839f21a038f30d7fe313d272df881d30b329 Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.062082 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.131770 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.131910 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.132000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcz7w\" (UniqueName: \"kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.132071 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.132281 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.132323 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data\") pod \"1d7c054b-7ade-402e-b389-d07ced69c957\" (UID: \"1d7c054b-7ade-402e-b389-d07ced69c957\") " Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.153291 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.153418 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.153534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w" (OuterVolumeSpecName: "kube-api-access-pcz7w") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "kube-api-access-pcz7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.153586 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts" (OuterVolumeSpecName: "scripts") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.176206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.176604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data" (OuterVolumeSpecName: "config-data") pod "1d7c054b-7ade-402e-b389-d07ced69c957" (UID: "1d7c054b-7ade-402e-b389-d07ced69c957"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.216298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c87999589-tj5hk" event={"ID":"6d4d2004-223b-4b0e-9b88-229437567c01","Type":"ContainerStarted","Data":"2bae1ffc69ac8273bfa59a97f7a7839f21a038f30d7fe313d272df881d30b329"} Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.223300 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667d6544d-8ddpx" event={"ID":"306a4319-6233-4455-85ac-b0c422603faf","Type":"ContainerStarted","Data":"013ef760686ab27d1b6d48f1e5b258f9635b7c4c3dcbe5ac6c324b9545577295"} Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.227085 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t47dq" event={"ID":"1d7c054b-7ade-402e-b389-d07ced69c957","Type":"ContainerDied","Data":"2da7a049924c78268fe3122b9ba5538ab4861a9a29515ae45354656acd34ae36"} Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.227121 4698 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2da7a049924c78268fe3122b9ba5538ab4861a9a29515ae45354656acd34ae36" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.227289 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t47dq" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.229520 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.229580 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.229593 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.237315 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238258 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238285 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238295 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238307 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-credential-keys\") on node \"crc\" 
DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238344 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcz7w\" (UniqueName: \"kubernetes.io/projected/1d7c054b-7ade-402e-b389-d07ced69c957-kube-api-access-pcz7w\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.238357 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d7c054b-7ade-402e-b389-d07ced69c957-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.309467 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.374069 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-658b97bb55-lp7jm"] Oct 06 12:03:44 crc kubenswrapper[4698]: E1006 12:03:44.374690 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7c054b-7ade-402e-b389-d07ced69c957" containerName="keystone-bootstrap" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.375103 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7c054b-7ade-402e-b389-d07ced69c957" containerName="keystone-bootstrap" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.375325 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7c054b-7ade-402e-b389-d07ced69c957" containerName="keystone-bootstrap" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.376189 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.400353 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.400750 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.403053 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.405927 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.407877 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dw6fp" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.408288 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.458389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-fernet-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.473690 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-config-data\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.473986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-credential-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.474170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-internal-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.474240 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlrq\" (UniqueName: \"kubernetes.io/projected/59515f7e-0c54-4044-8b9a-45f3aebb9870-kube-api-access-ghlrq\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.474332 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-combined-ca-bundle\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.474396 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-scripts\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.474426 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-public-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.528177 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-658b97bb55-lp7jm"] Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577549 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-credential-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577639 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-internal-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577678 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlrq\" (UniqueName: \"kubernetes.io/projected/59515f7e-0c54-4044-8b9a-45f3aebb9870-kube-api-access-ghlrq\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577710 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-combined-ca-bundle\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " 
pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-scripts\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577767 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-public-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-fernet-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.577864 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-config-data\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.613171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-combined-ca-bundle\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.619301 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-credential-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.628158 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-config-data\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.631153 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-fernet-keys\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.633366 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-internal-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.647987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-public-tls-certs\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.651883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlrq\" (UniqueName: 
\"kubernetes.io/projected/59515f7e-0c54-4044-8b9a-45f3aebb9870-kube-api-access-ghlrq\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.669589 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59515f7e-0c54-4044-8b9a-45f3aebb9870-scripts\") pod \"keystone-658b97bb55-lp7jm\" (UID: \"59515f7e-0c54-4044-8b9a-45f3aebb9870\") " pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:44 crc kubenswrapper[4698]: I1006 12:03:44.750641 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:45 crc kubenswrapper[4698]: I1006 12:03:45.914321 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:03:46 crc kubenswrapper[4698]: I1006 12:03:46.007310 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:46 crc kubenswrapper[4698]: I1006 12:03:46.014965 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="dnsmasq-dns" containerID="cri-o://876d99631ea0d4c187701f625a6c21323a1e72e51a0133b234876db86f1652f6" gracePeriod=10 Oct 06 12:03:46 crc kubenswrapper[4698]: I1006 12:03:46.268701 4698 generic.go:334] "Generic (PLEG): container finished" podID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerID="876d99631ea0d4c187701f625a6c21323a1e72e51a0133b234876db86f1652f6" exitCode=0 Oct 06 12:03:46 crc kubenswrapper[4698]: I1006 12:03:46.269195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" 
event={"ID":"f85e7a86-219c-4d1e-922c-8d8f4fec787d","Type":"ContainerDied","Data":"876d99631ea0d4c187701f625a6c21323a1e72e51a0133b234876db86f1652f6"} Oct 06 12:03:46 crc kubenswrapper[4698]: I1006 12:03:46.727994 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.240665 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.241304 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.362131 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.870535 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.871388 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api-log" containerID="cri-o://d7a3f50f3fd2a604dc2498eb746a4943cd7e8dc5de53a1027cdfab593b607d8d" gracePeriod=30 Oct 06 12:03:47 crc kubenswrapper[4698]: I1006 12:03:47.871466 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api" containerID="cri-o://b03bec7e62c24f1dc68bc562e78ff33bbf7bf998ca28f24a468d2bfb2c93d36d" gracePeriod=30 Oct 06 12:03:48 crc kubenswrapper[4698]: I1006 12:03:48.310358 4698 generic.go:334] "Generic (PLEG): container finished" podID="9559acba-7cb8-4602-b32d-51385773c9db" 
containerID="d7a3f50f3fd2a604dc2498eb746a4943cd7e8dc5de53a1027cdfab593b607d8d" exitCode=143 Oct 06 12:03:48 crc kubenswrapper[4698]: I1006 12:03:48.312058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerDied","Data":"d7a3f50f3fd2a604dc2498eb746a4943cd7e8dc5de53a1027cdfab593b607d8d"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.126611 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.147705 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4hx\" (UniqueName: \"kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.147763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.147819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.147867 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: 
\"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.148060 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.148349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb\") pod \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\" (UID: \"f85e7a86-219c-4d1e-922c-8d8f4fec787d\") " Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.189351 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx" (OuterVolumeSpecName: "kube-api-access-dx4hx") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). InnerVolumeSpecName "kube-api-access-dx4hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.245167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config" (OuterVolumeSpecName: "config") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.247599 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.252039 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4hx\" (UniqueName: \"kubernetes.io/projected/f85e7a86-219c-4d1e-922c-8d8f4fec787d-kube-api-access-dx4hx\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.252078 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.252088 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.259793 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.300855 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.355380 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.355420 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.391182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f85e7a86-219c-4d1e-922c-8d8f4fec787d" (UID: "f85e7a86-219c-4d1e-922c-8d8f4fec787d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.457955 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f85e7a86-219c-4d1e-922c-8d8f4fec787d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.464678 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-658b97bb55-lp7jm"] Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.465632 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c87999589-tj5hk" event={"ID":"6d4d2004-223b-4b0e-9b88-229437567c01","Type":"ContainerStarted","Data":"f8e0dcff6fdd5c7dd8a389325878937076b3a976979ed8244bf35f537b1648c8"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.469217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nc7nk" 
event={"ID":"df1bd773-04e5-4524-a48e-b7a65c983a89","Type":"ContainerStarted","Data":"7a2140df0254cf85694203519aa0a667a574788f23604673caefc8f99e920186"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.509924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" event={"ID":"f85e7a86-219c-4d1e-922c-8d8f4fec787d","Type":"ContainerDied","Data":"673c0d1a1939dd3a51a3e3260bc825854e1dead5e3529b7c68a38dcc2c093c3d"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.509990 4698 scope.go:117] "RemoveContainer" containerID="876d99631ea0d4c187701f625a6c21323a1e72e51a0133b234876db86f1652f6" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.517783 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nc7nk" podStartSLOduration=3.928043443 podStartE2EDuration="51.517753661s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="2025-10-06 12:03:00.994717056 +0000 UTC m=+1068.407409229" lastFinishedPulling="2025-10-06 12:03:48.584427274 +0000 UTC m=+1115.997119447" observedRunningTime="2025-10-06 12:03:49.507659734 +0000 UTC m=+1116.920351907" watchObservedRunningTime="2025-10-06 12:03:49.517753661 +0000 UTC m=+1116.930445834" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.521724 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rgglq" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.552247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667d6544d-8ddpx" event={"ID":"306a4319-6233-4455-85ac-b0c422603faf","Type":"ContainerStarted","Data":"916f9c24627d4917f45116faf37878f77aa0eef2be7bf1007283ad68a28978b4"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.568381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerStarted","Data":"c3a0794c7abdfb84e425413038c1ad0e404152b53fbacea6e014ba48e15726fe"} Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.686827 4698 scope.go:117] "RemoveContainer" containerID="a440125a4ee36863e59a145fd83617499560f4c6674d2bac782b892ea31d323e" Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.714107 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:49 crc kubenswrapper[4698]: I1006 12:03:49.724462 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rgglq"] Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.601710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667d6544d-8ddpx" event={"ID":"306a4319-6233-4455-85ac-b0c422603faf","Type":"ContainerStarted","Data":"1cb1907b7fc5095d9ce567d02ff1e732d1537a1925db77e1fe8e5373c1ee03fe"} Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.604187 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.604523 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.606146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-658b97bb55-lp7jm" event={"ID":"59515f7e-0c54-4044-8b9a-45f3aebb9870","Type":"ContainerStarted","Data":"4e2859167c40715bb4f6172fbf6b31d2fd9ed475f84f7334d5a2d5bd7948688f"} Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.606267 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-658b97bb55-lp7jm" event={"ID":"59515f7e-0c54-4044-8b9a-45f3aebb9870","Type":"ContainerStarted","Data":"0b6baf58442cd0036c5159ac9549d759396c8dda77666eaf270fd40106834612"} Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.606362 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-658b97bb55-lp7jm" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.609738 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerStarted","Data":"173b48a8cad7a4c51e3bb8a8d166b5b86cb450c616657a5b4be1699c5283190f"} Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.613077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c87999589-tj5hk" event={"ID":"6d4d2004-223b-4b0e-9b88-229437567c01","Type":"ContainerStarted","Data":"6cd3f0d4887f7a31dfed432d2158d326399b78a984dae62596e57672440c0d16"} Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.613329 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.690797 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-667d6544d-8ddpx" podStartSLOduration=10.690764005 podStartE2EDuration="10.690764005s" podCreationTimestamp="2025-10-06 12:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:50.651613089 +0000 UTC m=+1118.064305302" watchObservedRunningTime="2025-10-06 
12:03:50.690764005 +0000 UTC m=+1118.103456168" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.692162 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c87999589-tj5hk" podStartSLOduration=11.692155799 podStartE2EDuration="11.692155799s" podCreationTimestamp="2025-10-06 12:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:50.676462666 +0000 UTC m=+1118.089154839" watchObservedRunningTime="2025-10-06 12:03:50.692155799 +0000 UTC m=+1118.104847972" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.728201 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-658b97bb55-lp7jm" podStartSLOduration=6.72816373 podStartE2EDuration="6.72816373s" podCreationTimestamp="2025-10-06 12:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:50.716284999 +0000 UTC m=+1118.128977182" watchObservedRunningTime="2025-10-06 12:03:50.72816373 +0000 UTC m=+1118.140855903" Oct 06 12:03:50 crc kubenswrapper[4698]: I1006 12:03:50.750914 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.750893225 podStartE2EDuration="12.750893225s" podCreationTimestamp="2025-10-06 12:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:50.737626081 +0000 UTC m=+1118.150318254" watchObservedRunningTime="2025-10-06 12:03:50.750893225 +0000 UTC m=+1118.163585398" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.324419 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api" probeResult="failure" 
output="Get \"http://10.217.0.165:9322/\": read tcp 10.217.0.2:36294->10.217.0.165:9322: read: connection reset by peer" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.324853 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": read tcp 10.217.0.2:36278->10.217.0.165:9322: read: connection reset by peer" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.352450 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" path="/var/lib/kubelet/pods/f85e7a86-219c-4d1e-922c-8d8f4fec787d/volumes" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.630814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l9vdh" event={"ID":"d620584e-f9cd-432a-9f55-9aa1f1056766","Type":"ContainerStarted","Data":"58e4c201ff0557e866925c0ebc2dd3a242f3286eced63b5617823b5bd0cceaf6"} Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.641105 4698 generic.go:334] "Generic (PLEG): container finished" podID="9559acba-7cb8-4602-b32d-51385773c9db" containerID="b03bec7e62c24f1dc68bc562e78ff33bbf7bf998ca28f24a468d2bfb2c93d36d" exitCode=0 Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.642166 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerDied","Data":"b03bec7e62c24f1dc68bc562e78ff33bbf7bf998ca28f24a468d2bfb2c93d36d"} Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.655364 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-l9vdh" podStartSLOduration=3.926965456 podStartE2EDuration="53.655322136s" podCreationTimestamp="2025-10-06 12:02:58 +0000 UTC" firstStartedPulling="2025-10-06 12:03:00.96260379 +0000 UTC m=+1068.375295963" 
lastFinishedPulling="2025-10-06 12:03:50.69096047 +0000 UTC m=+1118.103652643" observedRunningTime="2025-10-06 12:03:51.650386525 +0000 UTC m=+1119.063078708" watchObservedRunningTime="2025-10-06 12:03:51.655322136 +0000 UTC m=+1119.068014309" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.831471 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.923915 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data\") pod \"9559acba-7cb8-4602-b32d-51385773c9db\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.923983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca\") pod \"9559acba-7cb8-4602-b32d-51385773c9db\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.925597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bb2r\" (UniqueName: \"kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r\") pod \"9559acba-7cb8-4602-b32d-51385773c9db\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.925649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle\") pod \"9559acba-7cb8-4602-b32d-51385773c9db\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.925907 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs\") pod \"9559acba-7cb8-4602-b32d-51385773c9db\" (UID: \"9559acba-7cb8-4602-b32d-51385773c9db\") " Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.927946 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs" (OuterVolumeSpecName: "logs") pod "9559acba-7cb8-4602-b32d-51385773c9db" (UID: "9559acba-7cb8-4602-b32d-51385773c9db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.952212 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r" (OuterVolumeSpecName: "kube-api-access-9bb2r") pod "9559acba-7cb8-4602-b32d-51385773c9db" (UID: "9559acba-7cb8-4602-b32d-51385773c9db"). InnerVolumeSpecName "kube-api-access-9bb2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.982581 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9559acba-7cb8-4602-b32d-51385773c9db" (UID: "9559acba-7cb8-4602-b32d-51385773c9db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:51 crc kubenswrapper[4698]: I1006 12:03:51.989466 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9559acba-7cb8-4602-b32d-51385773c9db" (UID: "9559acba-7cb8-4602-b32d-51385773c9db"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.029377 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9559acba-7cb8-4602-b32d-51385773c9db-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.029416 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.029432 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bb2r\" (UniqueName: \"kubernetes.io/projected/9559acba-7cb8-4602-b32d-51385773c9db-kube-api-access-9bb2r\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.029444 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.034292 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data" (OuterVolumeSpecName: "config-data") pod "9559acba-7cb8-4602-b32d-51385773c9db" (UID: "9559acba-7cb8-4602-b32d-51385773c9db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.131807 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9559acba-7cb8-4602-b32d-51385773c9db-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.436027 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.604240 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-849d766464-jl8th" podUID="2b4da0ff-f7c0-47d2-b204-69c0da4ab453" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.655921 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.657599 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9559acba-7cb8-4602-b32d-51385773c9db","Type":"ContainerDied","Data":"b154b74aadafbd5d7f62e969531ff64ade2ce7f5a417c6fc08c443eeabb520dc"}
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.657644 4698 scope.go:117] "RemoveContainer" containerID="b03bec7e62c24f1dc68bc562e78ff33bbf7bf998ca28f24a468d2bfb2c93d36d"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.712174 4698 scope.go:117] "RemoveContainer" containerID="d7a3f50f3fd2a604dc2498eb746a4943cd7e8dc5de53a1027cdfab593b607d8d"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.729122 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.737059 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.746214 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Oct 06 12:03:52 crc kubenswrapper[4698]: E1006 12:03:52.746826 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.746848 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api"
Oct 06 12:03:52 crc kubenswrapper[4698]: E1006 12:03:52.746895 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="dnsmasq-dns"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.746903 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="dnsmasq-dns"
Oct 06 12:03:52 crc kubenswrapper[4698]: E1006 12:03:52.746916 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="init"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.746923 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="init"
Oct 06 12:03:52 crc kubenswrapper[4698]: E1006 12:03:52.746937 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api-log"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.746943 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api-log"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.747161 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e7a86-219c-4d1e-922c-8d8f4fec787d" containerName="dnsmasq-dns"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.747176 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.747187 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9559acba-7cb8-4602-b32d-51385773c9db" containerName="watcher-api-log"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.752102 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.755472 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.758539 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.758628 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.765674 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847249 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847411 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tf7\" (UniqueName: \"kubernetes.io/projected/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-kube-api-access-q4tf7\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-config-data\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847548 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-logs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.847562 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.949810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tf7\" (UniqueName: \"kubernetes.io/projected/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-kube-api-access-q4tf7\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.949933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-config-data\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.950003 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.950068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-logs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.950088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.950121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.950212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.951488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-logs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.960636 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.960832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.968422 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.970997 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-config-data\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.974087 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tf7\" (UniqueName: \"kubernetes.io/projected/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-kube-api-access-q4tf7\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:52 crc kubenswrapper[4698]: I1006 12:03:52.974969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9514ee-65e0-4349-af35-8b7a65cf6bb9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4c9514ee-65e0-4349-af35-8b7a65cf6bb9\") " pod="openstack/watcher-api-0"
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.075243 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.347877 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9559acba-7cb8-4602-b32d-51385773c9db" path="/var/lib/kubelet/pods/9559acba-7cb8-4602-b32d-51385773c9db/volumes"
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.585678 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Oct 06 12:03:53 crc kubenswrapper[4698]: W1006 12:03:53.619651 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9514ee_65e0_4349_af35_8b7a65cf6bb9.slice/crio-70e66e36760d56fe681d3fecee71416e407793f639a52961f9b112522566a545 WatchSource:0}: Error finding container 70e66e36760d56fe681d3fecee71416e407793f639a52961f9b112522566a545: Status 404 returned error can't find the container with id 70e66e36760d56fe681d3fecee71416e407793f639a52961f9b112522566a545
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.678403 4698 generic.go:334] "Generic (PLEG): container finished" podID="df1bd773-04e5-4524-a48e-b7a65c983a89" containerID="7a2140df0254cf85694203519aa0a667a574788f23604673caefc8f99e920186" exitCode=0
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.679339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nc7nk" event={"ID":"df1bd773-04e5-4524-a48e-b7a65c983a89","Type":"ContainerDied","Data":"7a2140df0254cf85694203519aa0a667a574788f23604673caefc8f99e920186"}
Oct 06 12:03:53 crc kubenswrapper[4698]: I1006 12:03:53.700479 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4c9514ee-65e0-4349-af35-8b7a65cf6bb9","Type":"ContainerStarted","Data":"70e66e36760d56fe681d3fecee71416e407793f639a52961f9b112522566a545"}
Oct 06 12:03:54 crc kubenswrapper[4698]: I1006 12:03:54.717060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4c9514ee-65e0-4349-af35-8b7a65cf6bb9","Type":"ContainerStarted","Data":"8c060de55a686ee6e728be4a1d81640c225c1916a39f9bb919b997945baac31b"}
Oct 06 12:03:54 crc kubenswrapper[4698]: I1006 12:03:54.717771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4c9514ee-65e0-4349-af35-8b7a65cf6bb9","Type":"ContainerStarted","Data":"7575918a4197dc7e10d017edbec571d9ba90c22efe2dca98df2c7d0690bd5557"}
Oct 06 12:03:54 crc kubenswrapper[4698]: I1006 12:03:54.751966 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.751943807 podStartE2EDuration="2.751943807s" podCreationTimestamp="2025-10-06 12:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:03:54.742548617 +0000 UTC m=+1122.155240790" watchObservedRunningTime="2025-10-06 12:03:54.751943807 +0000 UTC m=+1122.164635980"
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.235100 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.235581 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.235641 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x"
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.236899 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.236977 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044" gracePeriod=600
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.730782 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044" exitCode=0
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.731043 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044"}
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.732097 4698 scope.go:117] "RemoveContainer" containerID="96d00a48231f38aebcbe03f0402869c4d8faf731935340087e25c0cea08f5f67"
Oct 06 12:03:55 crc kubenswrapper[4698]: I1006 12:03:55.733381 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.765345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nc7nk" event={"ID":"df1bd773-04e5-4524-a48e-b7a65c983a89","Type":"ContainerDied","Data":"70855594300c70b467492f19741d942b1a9b1fb5c6e34895b494cb25f47cae74"}
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.765970 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70855594300c70b467492f19741d942b1a9b1fb5c6e34895b494cb25f47cae74"
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.769095 4698 generic.go:334] "Generic (PLEG): container finished" podID="d620584e-f9cd-432a-9f55-9aa1f1056766" containerID="58e4c201ff0557e866925c0ebc2dd3a242f3286eced63b5617823b5bd0cceaf6" exitCode=0
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.769149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l9vdh" event={"ID":"d620584e-f9cd-432a-9f55-9aa1f1056766","Type":"ContainerDied","Data":"58e4c201ff0557e866925c0ebc2dd3a242f3286eced63b5617823b5bd0cceaf6"}
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.865687 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.889845 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nc7nk"
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.989326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data\") pod \"df1bd773-04e5-4524-a48e-b7a65c983a89\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") "
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.990075 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnglf\" (UniqueName: \"kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf\") pod \"df1bd773-04e5-4524-a48e-b7a65c983a89\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") "
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.990240 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle\") pod \"df1bd773-04e5-4524-a48e-b7a65c983a89\" (UID: \"df1bd773-04e5-4524-a48e-b7a65c983a89\") "
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.998791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df1bd773-04e5-4524-a48e-b7a65c983a89" (UID: "df1bd773-04e5-4524-a48e-b7a65c983a89"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:03:57 crc kubenswrapper[4698]: I1006 12:03:57.999017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf" (OuterVolumeSpecName: "kube-api-access-qnglf") pod "df1bd773-04e5-4524-a48e-b7a65c983a89" (UID: "df1bd773-04e5-4524-a48e-b7a65c983a89"). InnerVolumeSpecName "kube-api-access-qnglf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.042711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df1bd773-04e5-4524-a48e-b7a65c983a89" (UID: "df1bd773-04e5-4524-a48e-b7a65c983a89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.075497 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.097884 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.097930 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnglf\" (UniqueName: \"kubernetes.io/projected/df1bd773-04e5-4524-a48e-b7a65c983a89-kube-api-access-qnglf\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.097947 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1bd773-04e5-4524-a48e-b7a65c983a89-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.820564 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nc7nk"
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.985408 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 06 12:03:58 crc kubenswrapper[4698]: I1006 12:03:58.985872 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.076177 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.077303 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.258279 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-746dff6454-x5fd6"]
Oct 06 12:03:59 crc kubenswrapper[4698]: E1006 12:03:59.258663 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" containerName="barbican-db-sync"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.258676 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" containerName="barbican-db-sync"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.258873 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" containerName="barbican-db-sync"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.260435 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.268188 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z89rc"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.268501 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.268684 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.284304 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-746dff6454-x5fd6"]
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.303430 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"]
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.305761 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5e62c6-2df3-4629-831b-a2342fef2343-logs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345765 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345800 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzxs\" (UniqueName: \"kubernetes.io/projected/fc5e62c6-2df3-4629-831b-a2342fef2343-kube-api-access-mjzxs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345824 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-combined-ca-bundle\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345895 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4bj\" (UniqueName: \"kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.345998 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data-custom\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.346040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.364193 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58d8667c5c-dbf84"]
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.390619 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"]
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.390783 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.395619 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.397378 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58d8667c5c-dbf84"]
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.449128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.449552 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.449590 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data-custom\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450166 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-combined-ca-bundle\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450290 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa92b339-0782-432a-a352-5a0718033683-logs\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450319 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5e62c6-2df3-4629-831b-a2342fef2343-logs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450577 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450652 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzxs\" (UniqueName: \"kubernetes.io/projected/fc5e62c6-2df3-4629-831b-a2342fef2343-kube-api-access-mjzxs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450757 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data-custom\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450811 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-combined-ca-bundle\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.450845 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4q5\" (UniqueName: \"kubernetes.io/projected/fa92b339-0782-432a-a352-5a0718033683-kube-api-access-mv4q5\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.451103 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.451916 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4bj\" (UniqueName: \"kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.454342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.455226 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.458296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5e62c6-2df3-4629-831b-a2342fef2343-logs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.462874 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.463463 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.464007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.510920 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4bj\" (UniqueName: \"kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj\") pod \"dnsmasq-dns-75c8ddd69c-7lm46\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46"
Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.511342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data\") pod
\"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.524028 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-combined-ca-bundle\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.543750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzxs\" (UniqueName: \"kubernetes.io/projected/fc5e62c6-2df3-4629-831b-a2342fef2343-kube-api-access-mjzxs\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.550731 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc5e62c6-2df3-4629-831b-a2342fef2343-config-data-custom\") pod \"barbican-keystone-listener-746dff6454-x5fd6\" (UID: \"fc5e62c6-2df3-4629-831b-a2342fef2343\") " pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.561324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.561405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-combined-ca-bundle\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.561470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa92b339-0782-432a-a352-5a0718033683-logs\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.561618 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data-custom\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.561667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4q5\" (UniqueName: \"kubernetes.io/projected/fa92b339-0782-432a-a352-5a0718033683-kube-api-access-mv4q5\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.563148 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa92b339-0782-432a-a352-5a0718033683-logs\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.592951 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data-custom\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.594219 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-config-data\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.598594 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4q5\" (UniqueName: \"kubernetes.io/projected/fa92b339-0782-432a-a352-5a0718033683-kube-api-access-mv4q5\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.604105 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"] Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.604942 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa92b339-0782-432a-a352-5a0718033683-combined-ca-bundle\") pod \"barbican-worker-58d8667c5c-dbf84\" (UID: \"fa92b339-0782-432a-a352-5a0718033683\") " pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.609763 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.619992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.631404 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"] Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.664245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.664355 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.664435 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbw8\" (UniqueName: \"kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.664581 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " 
pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.665120 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.690725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.706611 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.735779 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58d8667c5c-dbf84" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.767531 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.767629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.767699 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scbw8\" (UniqueName: 
\"kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.767723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.767806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.769057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.773454 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.775776 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.795561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbw8\" (UniqueName: \"kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.796137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle\") pod \"barbican-api-6b578fc998-97xd7\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") " pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.834723 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.834761 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:03:59 crc kubenswrapper[4698]: I1006 12:03:59.998406 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.325715 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383388 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383480 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26kzc\" (UniqueName: \"kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383739 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.383839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data\") pod \"d620584e-f9cd-432a-9f55-9aa1f1056766\" (UID: \"d620584e-f9cd-432a-9f55-9aa1f1056766\") " Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.387487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.392335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts" (OuterVolumeSpecName: "scripts") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.398307 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.400116 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc" (OuterVolumeSpecName: "kube-api-access-26kzc") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "kube-api-access-26kzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.449695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.482138 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data" (OuterVolumeSpecName: "config-data") pod "d620584e-f9cd-432a-9f55-9aa1f1056766" (UID: "d620584e-f9cd-432a-9f55-9aa1f1056766"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495866 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495905 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495918 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495928 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26kzc\" (UniqueName: \"kubernetes.io/projected/d620584e-f9cd-432a-9f55-9aa1f1056766-kube-api-access-26kzc\") on node \"crc\" DevicePath \"\"" Oct 
06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495938 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d620584e-f9cd-432a-9f55-9aa1f1056766-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.495952 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d620584e-f9cd-432a-9f55-9aa1f1056766-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.851029 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l9vdh" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.851264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l9vdh" event={"ID":"d620584e-f9cd-432a-9f55-9aa1f1056766","Type":"ContainerDied","Data":"70cda1faecc4ed4caa84e533bc9f4b68fcd531a86f40cec86907127b81a0ec37"} Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.851941 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70cda1faecc4ed4caa84e533bc9f4b68fcd531a86f40cec86907127b81a0ec37" Oct 06 12:04:00 crc kubenswrapper[4698]: E1006 12:04:00.904184 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.904336 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"] Oct 06 12:04:00 crc kubenswrapper[4698]: I1006 12:04:00.993304 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-746dff6454-x5fd6"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 
12:04:01.188543 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58d8667c5c-dbf84"] Oct 06 12:04:01 crc kubenswrapper[4698]: W1006 12:04:01.192655 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa92b339_0782_432a_a352_5a0718033683.slice/crio-cc9b11c05dd5e6f16e1b65d837f0f6bfda7d4a8ca156dede1fe61b9d86348fc0 WatchSource:0}: Error finding container cc9b11c05dd5e6f16e1b65d837f0f6bfda7d4a8ca156dede1fe61b9d86348fc0: Status 404 returned error can't find the container with id cc9b11c05dd5e6f16e1b65d837f0f6bfda7d4a8ca156dede1fe61b9d86348fc0 Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.313115 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.642817 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:01 crc kubenswrapper[4698]: E1006 12:04:01.643841 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" containerName="cinder-db-sync" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.643862 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" containerName="cinder-db-sync" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.644160 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" containerName="cinder-db-sync" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.645460 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.659962 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.660273 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.661488 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.664377 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r69m6" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.687495 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740194 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740222 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " 
pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740294 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5r4\" (UniqueName: \"kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.740351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.746944 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.761257 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.763344 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.771302 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.827746 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.834166 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.836865 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.842947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.843201 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.843310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.846298 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.843389 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849517 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw5r4\" (UniqueName: \"kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849633 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849676 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849723 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849787 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849832 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.849885 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb97f\" (UniqueName: \"kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.851518 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.900719 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" event={"ID":"fc5e62c6-2df3-4629-831b-a2342fef2343","Type":"ContainerStarted","Data":"3139c8ebe677e20833b2339e29353a0aa6c916b3a7245a98bb42da517c0c9e88"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.903204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" event={"ID":"a3e6e129-b177-4448-9537-20ac1d1edeb4","Type":"ContainerStarted","Data":"d559d87532ba0f8274798fc1c049f34f7d15a9d661baf5c0a159028daecaadfc"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.909986 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerStarted","Data":"fe9b820ca93aa6ac1f888d3e64f06a0407d1710f729e45c84d7cfa9bb32e9b65"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.910381 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="ceilometer-notification-agent" containerID="cri-o://8c1e954d910aac1280114fe56b674d776984fd67a4f198347c001010d95a1dfa" gracePeriod=30 Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.910535 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.911228 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="sg-core" containerID="cri-o://c3a0794c7abdfb84e425413038c1ad0e404152b53fbacea6e014ba48e15726fe" gracePeriod=30 Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.911427 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="proxy-httpd" 
containerID="cri-o://fe9b820ca93aa6ac1f888d3e64f06a0407d1710f729e45c84d7cfa9bb32e9b65" gracePeriod=30 Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.921782 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.924481 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.935862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5r4\" (UniqueName: \"kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.945135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58d8667c5c-dbf84" event={"ID":"fa92b339-0782-432a-a352-5a0718033683","Type":"ContainerStarted","Data":"cc9b11c05dd5e6f16e1b65d837f0f6bfda7d4a8ca156dede1fe61b9d86348fc0"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.956312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerStarted","Data":"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.956399 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" 
event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerStarted","Data":"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.956413 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerStarted","Data":"cb4fe6efe747b09511c893b60e4bdfb2f485a87a1c333082227dbf393c0ae234"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.957455 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.957555 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.958316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.959145 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.959224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 
12:04:01.959301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.959478 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.964831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.987220 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.987490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.987586 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.987716 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb97f\" (UniqueName: \"kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.987830 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnccb\" (UniqueName: \"kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.988000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.993264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.993608 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.989883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.990109 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.961097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts\") pod \"cinder-scheduler-0\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " pod="openstack/cinder-scheduler-0" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.967568 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.998586 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 12:04:01.973081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3"} Oct 06 12:04:01 crc kubenswrapper[4698]: I1006 
12:04:01.995862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:01.989445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.005289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.020123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb97f\" (UniqueName: \"kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f\") pod \"dnsmasq-dns-5784cf869f-fm462\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.020342 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b578fc998-97xd7" podStartSLOduration=3.020304771 podStartE2EDuration="3.020304771s" podCreationTimestamp="2025-10-06 12:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:01.994585902 +0000 UTC m=+1129.407278075" watchObservedRunningTime="2025-10-06 
12:04:02.020304771 +0000 UTC m=+1129.432996944" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.088697 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.101836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.101955 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.101979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.102293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnccb\" (UniqueName: \"kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.102684 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " 
pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.109799 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.110726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.110799 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.119650 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.121739 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.130224 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.145126 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.152452 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.153191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnccb\" (UniqueName: \"kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb\") pod \"cinder-api-0\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.282113 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.315093 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.438557 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.589853 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-849d766464-jl8th" podUID="2b4da0ff-f7c0-47d2-b204-69c0da4ab453" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.794970 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:04:02 crc kubenswrapper[4698]: I1006 12:04:02.945552 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.041785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerStarted","Data":"387b3970319a1121fc7de76da96bbb8bc4ed063c11ff181dfce59286f344a07b"} Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.076707 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.079864 4698 generic.go:334] "Generic (PLEG): container finished" podID="a3e6e129-b177-4448-9537-20ac1d1edeb4" containerID="6dcfd847a77ecc952ac97726763603ce3a770b55dc78d52f4403b485831ccc36" exitCode=0 Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.079929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" event={"ID":"a3e6e129-b177-4448-9537-20ac1d1edeb4","Type":"ContainerDied","Data":"6dcfd847a77ecc952ac97726763603ce3a770b55dc78d52f4403b485831ccc36"} Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.091091 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-fm462" event={"ID":"99dbf7f0-0e13-422a-bf7d-4060cc043b06","Type":"ContainerStarted","Data":"c776f751128f14d422aa0b0ab8d89e681052e0ae0dd0759b5a21171dac1516ff"} Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.134213 4698 generic.go:334] "Generic (PLEG): container finished" podID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerID="fe9b820ca93aa6ac1f888d3e64f06a0407d1710f729e45c84d7cfa9bb32e9b65" exitCode=0 Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.134265 4698 generic.go:334] "Generic (PLEG): container finished" podID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerID="c3a0794c7abdfb84e425413038c1ad0e404152b53fbacea6e014ba48e15726fe" exitCode=2 Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.135182 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerDied","Data":"fe9b820ca93aa6ac1f888d3e64f06a0407d1710f729e45c84d7cfa9bb32e9b65"} Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.135228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerDied","Data":"c3a0794c7abdfb84e425413038c1ad0e404152b53fbacea6e014ba48e15726fe"} Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.256678 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.270298 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.653228 4698 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.714442 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.714860 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807217 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4bj\" (UniqueName: \"kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807417 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807587 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807645 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.807672 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config\") pod \"a3e6e129-b177-4448-9537-20ac1d1edeb4\" (UID: \"a3e6e129-b177-4448-9537-20ac1d1edeb4\") " Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.847965 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj" (OuterVolumeSpecName: "kube-api-access-jr4bj") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "kube-api-access-jr4bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.858128 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.861696 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.875898 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.893061 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config" (OuterVolumeSpecName: "config") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.910704 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.910743 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.910753 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.910762 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr4bj\" (UniqueName: \"kubernetes.io/projected/a3e6e129-b177-4448-9537-20ac1d1edeb4-kube-api-access-jr4bj\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:03 
crc kubenswrapper[4698]: I1006 12:04:03.910772 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:03 crc kubenswrapper[4698]: I1006 12:04:03.934661 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3e6e129-b177-4448-9537-20ac1d1edeb4" (UID: "a3e6e129-b177-4448-9537-20ac1d1edeb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.015098 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e6e129-b177-4448-9537-20ac1d1edeb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.176370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" event={"ID":"a3e6e129-b177-4448-9537-20ac1d1edeb4","Type":"ContainerDied","Data":"d559d87532ba0f8274798fc1c049f34f7d15a9d661baf5c0a159028daecaadfc"} Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.176430 4698 scope.go:117] "RemoveContainer" containerID="6dcfd847a77ecc952ac97726763603ce3a770b55dc78d52f4403b485831ccc36" Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.176588 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7lm46" Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.188107 4698 generic.go:334] "Generic (PLEG): container finished" podID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerID="67dededc0c214ef5cef31776e9b9e6139c6f099a428f6650e8c9da24822ea744" exitCode=0 Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.188160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-fm462" event={"ID":"99dbf7f0-0e13-422a-bf7d-4060cc043b06","Type":"ContainerDied","Data":"67dededc0c214ef5cef31776e9b9e6139c6f099a428f6650e8c9da24822ea744"} Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.196248 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerStarted","Data":"13a3df10d6de75d09353cf9c32b4f727c7785185ef6b4d7e1dc00db493a3563f"} Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.217816 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.275883 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"] Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.294958 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7lm46"] Oct 06 12:04:04 crc kubenswrapper[4698]: I1006 12:04:04.298686 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:04 crc kubenswrapper[4698]: E1006 12:04:04.364516 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3e6e129_b177_4448_9537_20ac1d1edeb4.slice/crio-d559d87532ba0f8274798fc1c049f34f7d15a9d661baf5c0a159028daecaadfc\": RecentStats: unable to 
find data in memory cache]" Oct 06 12:04:05 crc kubenswrapper[4698]: I1006 12:04:05.213692 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerStarted","Data":"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27"} Oct 06 12:04:05 crc kubenswrapper[4698]: I1006 12:04:05.346201 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e6e129-b177-4448-9537-20ac1d1edeb4" path="/var/lib/kubelet/pods/a3e6e129-b177-4448-9537-20ac1d1edeb4/volumes" Oct 06 12:04:05 crc kubenswrapper[4698]: I1006 12:04:05.462791 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:05 crc kubenswrapper[4698]: I1006 12:04:05.961597 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-798cdc9cb4-kt9cg" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.131266 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6857b4f974-dqhrx"] Oct 06 12:04:06 crc kubenswrapper[4698]: E1006 12:04:06.131771 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e6e129-b177-4448-9537-20ac1d1edeb4" containerName="init" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.131792 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e6e129-b177-4448-9537-20ac1d1edeb4" containerName="init" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.132037 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e6e129-b177-4448-9537-20ac1d1edeb4" containerName="init" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.133436 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.140384 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.141618 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.166434 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6857b4f974-dqhrx"] Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294148 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b610ef-3459-4cf9-9328-d1f95d01be7a-logs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2bp\" (UniqueName: \"kubernetes.io/projected/61b610ef-3459-4cf9-9328-d1f95d01be7a-kube-api-access-lm2bp\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294663 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-public-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data-custom\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-internal-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.294872 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-combined-ca-bundle\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.301854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58d8667c5c-dbf84" event={"ID":"fa92b339-0782-432a-a352-5a0718033683","Type":"ContainerStarted","Data":"f9ef660830ab2e29d9a6481a632545ad6df4d14ce79ad116ba6ea18848ed36a0"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.301929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58d8667c5c-dbf84" 
event={"ID":"fa92b339-0782-432a-a352-5a0718033683","Type":"ContainerStarted","Data":"e463d1a89906b050dfd90b597b074ac0e2ff7991c84b20884a6a5f96d6522951"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.306528 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" event={"ID":"fc5e62c6-2df3-4629-831b-a2342fef2343","Type":"ContainerStarted","Data":"e7054546c3cb89eb1b6feecf0ca71eee93f3045273f871096c9587812b4ed933"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.306570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" event={"ID":"fc5e62c6-2df3-4629-831b-a2342fef2343","Type":"ContainerStarted","Data":"04292102d8cf91e4aa8f925e56d7bec1d71436ea12d4dd145411db0f3366dce3"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.314309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-fm462" event={"ID":"99dbf7f0-0e13-422a-bf7d-4060cc043b06","Type":"ContainerStarted","Data":"cd3536548f888fccd268a5f94fdada0b9e621761a90ab63a23b09e2a29f3a4ac"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.314491 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.330429 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58d8667c5c-dbf84" podStartSLOduration=3.155485814 podStartE2EDuration="7.330408355s" podCreationTimestamp="2025-10-06 12:03:59 +0000 UTC" firstStartedPulling="2025-10-06 12:04:01.195208317 +0000 UTC m=+1128.607900490" lastFinishedPulling="2025-10-06 12:04:05.370130858 +0000 UTC m=+1132.782823031" observedRunningTime="2025-10-06 12:04:06.327224087 +0000 UTC m=+1133.739916260" watchObservedRunningTime="2025-10-06 12:04:06.330408355 +0000 UTC m=+1133.743100528" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.352499 
4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-fm462" podStartSLOduration=5.352474694 podStartE2EDuration="5.352474694s" podCreationTimestamp="2025-10-06 12:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:06.348168389 +0000 UTC m=+1133.760860562" watchObservedRunningTime="2025-10-06 12:04:06.352474694 +0000 UTC m=+1133.765166867" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.383172 4698 generic.go:334] "Generic (PLEG): container finished" podID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerID="8c1e954d910aac1280114fe56b674d776984fd67a4f198347c001010d95a1dfa" exitCode=0 Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.383226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerDied","Data":"8c1e954d910aac1280114fe56b674d776984fd67a4f198347c001010d95a1dfa"} Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.385579 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-746dff6454-x5fd6" podStartSLOduration=3.03871745 podStartE2EDuration="7.385568282s" podCreationTimestamp="2025-10-06 12:03:59 +0000 UTC" firstStartedPulling="2025-10-06 12:04:01.02258811 +0000 UTC m=+1128.435280273" lastFinishedPulling="2025-10-06 12:04:05.369438932 +0000 UTC m=+1132.782131105" observedRunningTime="2025-10-06 12:04:06.372920303 +0000 UTC m=+1133.785612476" watchObservedRunningTime="2025-10-06 12:04:06.385568282 +0000 UTC m=+1133.798260455" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.397955 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: 
\"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data-custom\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-internal-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398217 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-combined-ca-bundle\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b610ef-3459-4cf9-9328-d1f95d01be7a-logs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2bp\" (UniqueName: \"kubernetes.io/projected/61b610ef-3459-4cf9-9328-d1f95d01be7a-kube-api-access-lm2bp\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: 
\"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.398424 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-public-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.401154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61b610ef-3459-4cf9-9328-d1f95d01be7a-logs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.404726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data-custom\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.405200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-public-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.413230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-combined-ca-bundle\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " 
pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.416696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-internal-tls-certs\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.422467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2bp\" (UniqueName: \"kubernetes.io/projected/61b610ef-3459-4cf9-9328-d1f95d01be7a-kube-api-access-lm2bp\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.432515 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61b610ef-3459-4cf9-9328-d1f95d01be7a-config-data\") pod \"barbican-api-6857b4f974-dqhrx\" (UID: \"61b610ef-3459-4cf9-9328-d1f95d01be7a\") " pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.508832 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.566042 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.715385 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fkgk\" (UniqueName: \"kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716069 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716240 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716368 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.716393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd\") pod \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\" (UID: \"a7fb1575-bbc3-4d9f-a0ce-31652f935cac\") " Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.717419 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.717718 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.729919 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts" (OuterVolumeSpecName: "scripts") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.730221 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk" (OuterVolumeSpecName: "kube-api-access-7fkgk") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "kube-api-access-7fkgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.820517 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.820990 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fkgk\" (UniqueName: \"kubernetes.io/projected/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-kube-api-access-7fkgk\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.821000 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.821025 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.826977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.845132 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.888979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data" (OuterVolumeSpecName: "config-data") pod "a7fb1575-bbc3-4d9f-a0ce-31652f935cac" (UID: "a7fb1575-bbc3-4d9f-a0ce-31652f935cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.933223 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.933582 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:06 crc kubenswrapper[4698]: I1006 12:04:06.933684 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7fb1575-bbc3-4d9f-a0ce-31652f935cac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.183387 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6857b4f974-dqhrx"] Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.407552 4698 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.407545 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7fb1575-bbc3-4d9f-a0ce-31652f935cac","Type":"ContainerDied","Data":"c0891cfb60f91378cda01190c426d7e0595433fd8d58edc676f42e033dfc7e7b"} Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.408156 4698 scope.go:117] "RemoveContainer" containerID="fe9b820ca93aa6ac1f888d3e64f06a0407d1710f729e45c84d7cfa9bb32e9b65" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.431002 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerStarted","Data":"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4"} Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.431209 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api-log" containerID="cri-o://56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" gracePeriod=30 Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.431337 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.431370 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api" containerID="cri-o://fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" gracePeriod=30 Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.441878 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerStarted","Data":"f286985b4b5620bed9dcb4ffea805bbef89fe3be1841367b5cd7ddedaefdbf6c"} 
Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.446121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6857b4f974-dqhrx" event={"ID":"61b610ef-3459-4cf9-9328-d1f95d01be7a","Type":"ContainerStarted","Data":"c4fe66c4e35f59e4f42d978089eb37c1125762ad3a5849a6daa21091e0301779"} Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.531715 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.562002 4698 scope.go:117] "RemoveContainer" containerID="c3a0794c7abdfb84e425413038c1ad0e404152b53fbacea6e014ba48e15726fe" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.566493 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.593452 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.593424128 podStartE2EDuration="6.593424128s" podCreationTimestamp="2025-10-06 12:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:07.481090153 +0000 UTC m=+1134.893782316" watchObservedRunningTime="2025-10-06 12:04:07.593424128 +0000 UTC m=+1135.006116301" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.648105 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:07 crc kubenswrapper[4698]: E1006 12:04:07.649384 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="ceilometer-notification-agent" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.649411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="ceilometer-notification-agent" Oct 06 12:04:07 crc kubenswrapper[4698]: E1006 12:04:07.649445 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="proxy-httpd" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.649453 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="proxy-httpd" Oct 06 12:04:07 crc kubenswrapper[4698]: E1006 12:04:07.649483 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="sg-core" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.649490 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="sg-core" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.650040 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="ceilometer-notification-agent" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.650090 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="sg-core" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.650123 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" containerName="proxy-httpd" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.664323 4698 scope.go:117] "RemoveContainer" containerID="8c1e954d910aac1280114fe56b674d776984fd67a4f198347c001010d95a1dfa" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.696563 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.696756 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.704769 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.704997 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891243 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891593 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891659 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891702 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891727 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9b8t\" (UniqueName: \"kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.891816 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993643 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993698 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts\") pod \"ceilometer-0\" (UID: 
\"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993777 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993804 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993824 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9b8t\" (UniqueName: \"kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:07 crc kubenswrapper[4698]: I1006 12:04:07.993889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.001454 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.002552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.003586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.005813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.010479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.012174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.018387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9b8t\" (UniqueName: \"kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t\") pod \"ceilometer-0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.047078 4698 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.383126 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.470074 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6857b4f974-dqhrx" event={"ID":"61b610ef-3459-4cf9-9328-d1f95d01be7a","Type":"ContainerStarted","Data":"a2d1932c77e32ee2a599f47c73c28575cbbdba47abaa0241f5d7844d798ff3e4"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.470131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6857b4f974-dqhrx" event={"ID":"61b610ef-3459-4cf9-9328-d1f95d01be7a","Type":"ContainerStarted","Data":"5fb760270acbacfd7e49b25b5935a22acd4f8c3d2648a25415ad0344dce2f2da"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.470558 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.470608 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6857b4f974-dqhrx" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477773 4698 generic.go:334] "Generic (PLEG): container finished" podID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerID="fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" exitCode=0 Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477815 4698 generic.go:334] "Generic (PLEG): container finished" podID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerID="56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" exitCode=143 Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerDied","Data":"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477889 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerDied","Data":"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63e5296b-96f5-450b-ab6f-1286e50651b9","Type":"ContainerDied","Data":"13a3df10d6de75d09353cf9c32b4f727c7785185ef6b4d7e1dc00db493a3563f"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.477923 4698 scope.go:117] "RemoveContainer" containerID="fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.478051 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.481098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerStarted","Data":"12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede"} Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515521 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515552 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515687 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515738 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnccb\" (UniqueName: \"kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.515945 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs\") pod \"63e5296b-96f5-450b-ab6f-1286e50651b9\" (UID: \"63e5296b-96f5-450b-ab6f-1286e50651b9\") " Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.520167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.521758 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs" (OuterVolumeSpecName: "logs") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.524119 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.526808 4698 scope.go:117] "RemoveContainer" containerID="56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.527087 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts" (OuterVolumeSpecName: "scripts") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.528625 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb" (OuterVolumeSpecName: "kube-api-access-fnccb") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "kube-api-access-fnccb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.551592 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6857b4f974-dqhrx" podStartSLOduration=2.551563001 podStartE2EDuration="2.551563001s" podCreationTimestamp="2025-10-06 12:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:08.497339257 +0000 UTC m=+1135.910031430" watchObservedRunningTime="2025-10-06 12:04:08.551563001 +0000 UTC m=+1135.964255174" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.588378 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.174331641 podStartE2EDuration="7.588351391s" podCreationTimestamp="2025-10-06 12:04:01 +0000 UTC" firstStartedPulling="2025-10-06 12:04:02.95618162 +0000 UTC m=+1130.368873793" lastFinishedPulling="2025-10-06 12:04:05.37020137 +0000 UTC m=+1132.782893543" observedRunningTime="2025-10-06 12:04:08.534809433 +0000 UTC m=+1135.947501606" watchObservedRunningTime="2025-10-06 12:04:08.588351391 +0000 UTC m=+1136.001043564" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.619921 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.619957 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.619970 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnccb\" (UniqueName: 
\"kubernetes.io/projected/63e5296b-96f5-450b-ab6f-1286e50651b9-kube-api-access-fnccb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.619979 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63e5296b-96f5-450b-ab6f-1286e50651b9-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.619987 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63e5296b-96f5-450b-ab6f-1286e50651b9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.698354 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.702231 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.719263 4698 scope.go:117] "RemoveContainer" containerID="fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.728640 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:08 crc kubenswrapper[4698]: E1006 12:04:08.731270 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4\": container with ID starting with fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4 not found: ID does not exist" containerID="fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.731352 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4"} err="failed to get container status \"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4\": rpc error: code = NotFound desc = could not find container \"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4\": container with ID starting with fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4 not found: ID does not exist" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.731389 4698 scope.go:117] "RemoveContainer" containerID="56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" Oct 06 12:04:08 crc kubenswrapper[4698]: E1006 12:04:08.742243 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27\": container with ID starting with 56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27 not found: ID does not exist" containerID="56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.742328 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27"} err="failed to get container status \"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27\": rpc error: code = NotFound desc = could not find container \"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27\": container with ID starting with 56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27 not found: ID does not exist" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.742372 4698 scope.go:117] "RemoveContainer" containerID="fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.750200 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4"} err="failed to get container status \"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4\": rpc error: code = NotFound desc = could not find container \"fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4\": container with ID starting with fbe4535f2e64e76d44065aefc7df44e24e71bca81148bb4d8a9ac9c0b21232e4 not found: ID does not exist" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.750262 4698 scope.go:117] "RemoveContainer" containerID="56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.756824 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27"} err="failed to get container status \"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27\": rpc error: code = NotFound desc = could not find container \"56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27\": container with ID starting with 56f93e1cf6a883625174734afa27718b372829b6879afda33d25871beae8da27 not found: ID does not exist" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.818352 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data" (OuterVolumeSpecName: "config-data") pod "63e5296b-96f5-450b-ab6f-1286e50651b9" (UID: "63e5296b-96f5-450b-ab6f-1286e50651b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:08 crc kubenswrapper[4698]: I1006 12:04:08.830327 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e5296b-96f5-450b-ab6f-1286e50651b9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.117154 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.142090 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.185187 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:09 crc kubenswrapper[4698]: E1006 12:04:09.185727 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.185748 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api" Oct 06 12:04:09 crc 
kubenswrapper[4698]: E1006 12:04:09.185788 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api-log" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.185794 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api-log" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.186050 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api-log" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.186064 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" containerName="cinder-api" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.187258 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.194883 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.195187 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.195390 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.211362 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346588 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5496d3c-491b-4f5d-8351-2e7eac348fd2-logs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 
12:04:09.346654 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346705 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346755 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-scripts\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346787 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5496d3c-491b-4f5d-8351-2e7eac348fd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346862 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hmf\" (UniqueName: \"kubernetes.io/projected/b5496d3c-491b-4f5d-8351-2e7eac348fd2-kube-api-access-s8hmf\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.346883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.350826 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e5296b-96f5-450b-ab6f-1286e50651b9" path="/var/lib/kubelet/pods/63e5296b-96f5-450b-ab6f-1286e50651b9/volumes" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.351797 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fb1575-bbc3-4d9f-a0ce-31652f935cac" path="/var/lib/kubelet/pods/a7fb1575-bbc3-4d9f-a0ce-31652f935cac/volumes" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.448882 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5496d3c-491b-4f5d-8351-2e7eac348fd2-logs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-scripts\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449305 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5496d3c-491b-4f5d-8351-2e7eac348fd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449343 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc 
kubenswrapper[4698]: I1006 12:04:09.449388 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hmf\" (UniqueName: \"kubernetes.io/projected/b5496d3c-491b-4f5d-8351-2e7eac348fd2-kube-api-access-s8hmf\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5496d3c-491b-4f5d-8351-2e7eac348fd2-logs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.449916 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5496d3c-491b-4f5d-8351-2e7eac348fd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.456151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.456942 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.457502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.458681 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.459275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-scripts\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.468003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5496d3c-491b-4f5d-8351-2e7eac348fd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.475562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hmf\" (UniqueName: \"kubernetes.io/projected/b5496d3c-491b-4f5d-8351-2e7eac348fd2-kube-api-access-s8hmf\") pod \"cinder-api-0\" (UID: \"b5496d3c-491b-4f5d-8351-2e7eac348fd2\") " pod="openstack/cinder-api-0" Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.494700 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerStarted","Data":"eb0cadc440a44af9863b091e53246309e6f4ecee02e2d81f636e41e6f53b7b67"} Oct 06 12:04:09 crc kubenswrapper[4698]: I1006 12:04:09.508944 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.003730 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 12:04:10 crc kubenswrapper[4698]: W1006 12:04:10.012884 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5496d3c_491b_4f5d_8351_2e7eac348fd2.slice/crio-b5cbdee1229d55407bf636a47774688abcc09c2e0b362b22d43626bb94a77adb WatchSource:0}: Error finding container b5cbdee1229d55407bf636a47774688abcc09c2e0b362b22d43626bb94a77adb: Status 404 returned error can't find the container with id b5cbdee1229d55407bf636a47774688abcc09c2e0b362b22d43626bb94a77adb Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.209848 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c87999589-tj5hk" Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.303138 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"] Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.303749 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798cdc9cb4-kt9cg" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-api" containerID="cri-o://85553e367074951654f57cb2480328cbb9daebbf19074de634ae3d05627cf049" gracePeriod=30 Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.304152 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798cdc9cb4-kt9cg" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-httpd" 
containerID="cri-o://0d805ab4ee4c425219de6611052e93ca87eca2634313a8be38c4af653a5dfec6" gracePeriod=30 Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.537328 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5496d3c-491b-4f5d-8351-2e7eac348fd2","Type":"ContainerStarted","Data":"b5cbdee1229d55407bf636a47774688abcc09c2e0b362b22d43626bb94a77adb"} Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.542808 4698 generic.go:334] "Generic (PLEG): container finished" podID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerID="0d805ab4ee4c425219de6611052e93ca87eca2634313a8be38c4af653a5dfec6" exitCode=0 Oct 06 12:04:10 crc kubenswrapper[4698]: I1006 12:04:10.542854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerDied","Data":"0d805ab4ee4c425219de6611052e93ca87eca2634313a8be38c4af653a5dfec6"} Oct 06 12:04:11 crc kubenswrapper[4698]: I1006 12:04:11.564614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerStarted","Data":"66aa7276da3dfc405f80396bfaed8a3b72dc3e145d6f6bda11a5e3d128440846"} Oct 06 12:04:11 crc kubenswrapper[4698]: I1006 12:04:11.573763 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5496d3c-491b-4f5d-8351-2e7eac348fd2","Type":"ContainerStarted","Data":"a9afda0f0dd722116c77dc0dbd5e533f7cfca9237abe95594a3e0bddadf4d05c"} Oct 06 12:04:11 crc kubenswrapper[4698]: I1006 12:04:11.996560 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.002486 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-667d6544d-8ddpx" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.098211 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.184554 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.184947 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="dnsmasq-dns" containerID="cri-o://3bd95566480f3aed7a379dc013d8548d6081def95e72e8e8a931980de82ca39b" gracePeriod=10 Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.284326 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.407185 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.620350 4698 generic.go:334] "Generic (PLEG): container finished" podID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerID="3bd95566480f3aed7a379dc013d8548d6081def95e72e8e8a931980de82ca39b" exitCode=0 Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.620447 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" event={"ID":"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1","Type":"ContainerDied","Data":"3bd95566480f3aed7a379dc013d8548d6081def95e72e8e8a931980de82ca39b"} Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.633753 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerStarted","Data":"6d9135d7615114234ddd22f25832f22c5a4b3c1db59554aefb1312d956cfde77"} Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.657259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"b5496d3c-491b-4f5d-8351-2e7eac348fd2","Type":"ContainerStarted","Data":"58bc3d2c7fc0690521673837dc86159cd7da0ce52019c577a2858e05653e8f5b"} Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.657745 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.728884 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.728860051 podStartE2EDuration="3.728860051s" podCreationTimestamp="2025-10-06 12:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:12.689493569 +0000 UTC m=+1140.102185742" watchObservedRunningTime="2025-10-06 12:04:12.728860051 +0000 UTC m=+1140.141552224" Oct 06 12:04:12 crc kubenswrapper[4698]: I1006 12:04:12.883941 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.002335 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.098111 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.189805 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.189870 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.190095 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg9j6\" (UniqueName: \"kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.190141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.190194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: 
\"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.190220 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config\") pod \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\" (UID: \"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1\") " Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.204648 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6" (OuterVolumeSpecName: "kube-api-access-tg9j6") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "kube-api-access-tg9j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.253894 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b578fc998-97xd7" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.304364 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg9j6\" (UniqueName: \"kubernetes.io/projected/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-kube-api-access-tg9j6\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.349442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.357533 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config" (OuterVolumeSpecName: "config") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.391706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.395400 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.405655 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" (UID: "ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.407505 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.407529 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.407546 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.407560 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.407573 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.678551 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.678562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-smmk4" event={"ID":"ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1","Type":"ContainerDied","Data":"4d0ab84cd9da2c872657c462d78e09e63e8b4b681696459a7404cc64ba537b2c"} Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.678663 4698 scope.go:117] "RemoveContainer" containerID="3bd95566480f3aed7a379dc013d8548d6081def95e72e8e8a931980de82ca39b" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.686299 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerStarted","Data":"fedea8abb60bb1fa0b5eb640d9cdb5098a60d8a9898e38666fdc0cf2b58db996"} Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.686251 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="cinder-scheduler" containerID="cri-o://f286985b4b5620bed9dcb4ffea805bbef89fe3be1841367b5cd7ddedaefdbf6c" gracePeriod=30 Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.686397 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="probe" containerID="cri-o://12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede" gracePeriod=30 Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.725586 4698 scope.go:117] "RemoveContainer" containerID="445034ced48b5fd303fb9dea7f2c9fb3f8fdf5a390a71671459dbaed5c374e8f" Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.729871 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:04:13 crc kubenswrapper[4698]: I1006 12:04:13.736784 4698 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-smmk4"] Oct 06 12:04:14 crc kubenswrapper[4698]: I1006 12:04:14.716317 4698 generic.go:334] "Generic (PLEG): container finished" podID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerID="12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede" exitCode=0 Oct 06 12:04:14 crc kubenswrapper[4698]: I1006 12:04:14.716503 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerDied","Data":"12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede"} Oct 06 12:04:14 crc kubenswrapper[4698]: E1006 12:04:14.720276 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b69092_55b3_453f_9a21_c0824d0f6314.slice/crio-conmon-12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:04:14 crc kubenswrapper[4698]: I1006 12:04:14.723422 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerStarted","Data":"1aceb51e3bf9c9d97da734b7dfecdda0421aa05cb446c9ed75bc391983866f1d"} Oct 06 12:04:14 crc kubenswrapper[4698]: I1006 12:04:14.725193 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.081537 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-849d766464-jl8th" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.115695 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.972519215 podStartE2EDuration="8.115665466s" podCreationTimestamp="2025-10-06 12:04:07 +0000 UTC" 
firstStartedPulling="2025-10-06 12:04:08.816449375 +0000 UTC m=+1136.229141548" lastFinishedPulling="2025-10-06 12:04:13.959595626 +0000 UTC m=+1141.372287799" observedRunningTime="2025-10-06 12:04:14.746545346 +0000 UTC m=+1142.159237519" watchObservedRunningTime="2025-10-06 12:04:15.115665466 +0000 UTC m=+1142.528357679" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.175813 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.342709 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" path="/var/lib/kubelet/pods/ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1/volumes" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.751998 4698 generic.go:334] "Generic (PLEG): container finished" podID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerID="f286985b4b5620bed9dcb4ffea805bbef89fe3be1841367b5cd7ddedaefdbf6c" exitCode=0 Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.753656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerDied","Data":"f286985b4b5620bed9dcb4ffea805bbef89fe3be1841367b5cd7ddedaefdbf6c"} Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.952956 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999104 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999216 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw5r4\" (UniqueName: \"kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999541 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999596 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:15 crc kubenswrapper[4698]: I1006 12:04:15.999632 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data\") pod \"b7b69092-55b3-453f-9a21-c0824d0f6314\" (UID: \"b7b69092-55b3-453f-9a21-c0824d0f6314\") " Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.000160 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b69092-55b3-453f-9a21-c0824d0f6314-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.007763 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.014829 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4" (OuterVolumeSpecName: "kube-api-access-gw5r4") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "kube-api-access-gw5r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.026740 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts" (OuterVolumeSpecName: "scripts") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.103249 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw5r4\" (UniqueName: \"kubernetes.io/projected/b7b69092-55b3-453f-9a21-c0824d0f6314-kube-api-access-gw5r4\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.103288 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.103300 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.130143 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.188411 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data" (OuterVolumeSpecName: "config-data") pod "b7b69092-55b3-453f-9a21-c0824d0f6314" (UID: "b7b69092-55b3-453f-9a21-c0824d0f6314"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.205321 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.205360 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b69092-55b3-453f-9a21-c0824d0f6314-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.779703 4698 generic.go:334] "Generic (PLEG): container finished" podID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerID="85553e367074951654f57cb2480328cbb9daebbf19074de634ae3d05627cf049" exitCode=0
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.780220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerDied","Data":"85553e367074951654f57cb2480328cbb9daebbf19074de634ae3d05627cf049"}
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.794229 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b7b69092-55b3-453f-9a21-c0824d0f6314","Type":"ContainerDied","Data":"387b3970319a1121fc7de76da96bbb8bc4ed063c11ff181dfce59286f344a07b"}
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.794304 4698 scope.go:117] "RemoveContainer" containerID="12a2c5795352529a3b91ac1b2edefa479b5fda1a43c8783db1ceb98b7af37ede"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.794554 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.871957 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.875106 4698 scope.go:117] "RemoveContainer" containerID="f286985b4b5620bed9dcb4ffea805bbef89fe3be1841367b5cd7ddedaefdbf6c"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.885469 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.925173 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:16 crc kubenswrapper[4698]: E1006 12:04:16.925925 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="cinder-scheduler"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.925953 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="cinder-scheduler"
Oct 06 12:04:16 crc kubenswrapper[4698]: E1006 12:04:16.925969 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="init"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.925987 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="init"
Oct 06 12:04:16 crc kubenswrapper[4698]: E1006 12:04:16.926036 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="dnsmasq-dns"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.926043 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="dnsmasq-dns"
Oct 06 12:04:16 crc kubenswrapper[4698]: E1006 12:04:16.926081 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="probe"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.926088 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="probe"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.926312 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4c28e0-c76b-4ee0-8bed-cbea4379cdf1" containerName="dnsmasq-dns"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.926341 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="probe"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.926355 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" containerName="cinder-scheduler"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.927756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.931101 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.950430 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:16 crc kubenswrapper[4698]: I1006 12:04:16.994635 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798cdc9cb4-kt9cg"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.130270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6btz\" (UniqueName: \"kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz\") pod \"71c9462d-5711-493d-ad40-ac0e5ff9d037\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") "
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.131119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle\") pod \"71c9462d-5711-493d-ad40-ac0e5ff9d037\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") "
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.131403 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config\") pod \"71c9462d-5711-493d-ad40-ac0e5ff9d037\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") "
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.131605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config\") pod \"71c9462d-5711-493d-ad40-ac0e5ff9d037\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") "
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.131635 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs\") pod \"71c9462d-5711-493d-ad40-ac0e5ff9d037\" (UID: \"71c9462d-5711-493d-ad40-ac0e5ff9d037\") "
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd84c444-81fa-4206-8517-a25ba61c7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132353 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd2sr\" (UniqueName: \"kubernetes.io/projected/cd84c444-81fa-4206-8517-a25ba61c7209-kube-api-access-xd2sr\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132607 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132689 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.132719 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.142627 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "71c9462d-5711-493d-ad40-ac0e5ff9d037" (UID: "71c9462d-5711-493d-ad40-ac0e5ff9d037"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.165578 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz" (OuterVolumeSpecName: "kube-api-access-f6btz") pod "71c9462d-5711-493d-ad40-ac0e5ff9d037" (UID: "71c9462d-5711-493d-ad40-ac0e5ff9d037"). InnerVolumeSpecName "kube-api-access-f6btz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.221167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config" (OuterVolumeSpecName: "config") pod "71c9462d-5711-493d-ad40-ac0e5ff9d037" (UID: "71c9462d-5711-493d-ad40-ac0e5ff9d037"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.239325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.240340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.240397 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.240424 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd84c444-81fa-4206-8517-a25ba61c7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.240507 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.241091 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd2sr\" (UniqueName: \"kubernetes.io/projected/cd84c444-81fa-4206-8517-a25ba61c7209-kube-api-access-xd2sr\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.241260 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.241272 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.241283 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6btz\" (UniqueName: \"kubernetes.io/projected/71c9462d-5711-493d-ad40-ac0e5ff9d037-kube-api-access-f6btz\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.248058 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd84c444-81fa-4206-8517-a25ba61c7209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.258365 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.258710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71c9462d-5711-493d-ad40-ac0e5ff9d037" (UID: "71c9462d-5711-493d-ad40-ac0e5ff9d037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.260127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.269314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.269946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd2sr\" (UniqueName: \"kubernetes.io/projected/cd84c444-81fa-4206-8517-a25ba61c7209-kube-api-access-xd2sr\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.272546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd84c444-81fa-4206-8517-a25ba61c7209-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd84c444-81fa-4206-8517-a25ba61c7209\") " pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.307664 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "71c9462d-5711-493d-ad40-ac0e5ff9d037" (UID: "71c9462d-5711-493d-ad40-ac0e5ff9d037"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.314273 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.343848 4698 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.343885 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9462d-5711-493d-ad40-ac0e5ff9d037-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.361034 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b69092-55b3-453f-9a21-c0824d0f6314" path="/var/lib/kubelet/pods/b7b69092-55b3-453f-9a21-c0824d0f6314/volumes"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.707352 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.824668 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798cdc9cb4-kt9cg" event={"ID":"71c9462d-5711-493d-ad40-ac0e5ff9d037","Type":"ContainerDied","Data":"f25007dbd98f675c276b13647d55d26bdb21c69e555ab82adbbe194323b2ef64"}
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.824764 4698 scope.go:117] "RemoveContainer" containerID="0d805ab4ee4c425219de6611052e93ca87eca2634313a8be38c4af653a5dfec6"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.824769 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798cdc9cb4-kt9cg"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.841071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd84c444-81fa-4206-8517-a25ba61c7209","Type":"ContainerStarted","Data":"1baa747ae56b6839784e2002de469414bf992104501e7dcef99d5dab92ce60e0"}
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.862131 4698 scope.go:117] "RemoveContainer" containerID="85553e367074951654f57cb2480328cbb9daebbf19074de634ae3d05627cf049"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.874157 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"]
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.885817 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-798cdc9cb4-kt9cg"]
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.902596 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-658b97bb55-lp7jm"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.958126 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-849d766464-jl8th"
Oct 06 12:04:17 crc kubenswrapper[4698]: I1006 12:04:17.998856 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-654cf8498d-s5tdp"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.065893 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"]
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.267603 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6857b4f974-dqhrx"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.584908 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6857b4f974-dqhrx"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.673713 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"]
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.673945 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b578fc998-97xd7" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api-log" containerID="cri-o://0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e" gracePeriod=30
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.674402 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b578fc998-97xd7" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api" containerID="cri-o://b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385" gracePeriod=30
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.752457 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 06 12:04:18 crc kubenswrapper[4698]: E1006 12:04:18.753700 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-httpd"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.753723 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-httpd"
Oct 06 12:04:18 crc kubenswrapper[4698]: E1006 12:04:18.753763 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-api"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.753769 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-api"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.753940 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-httpd"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.753968 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" containerName="neutron-api"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.754695 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.757435 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.763232 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vtsp8"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.763509 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.786797 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.887243 4698 generic.go:334] "Generic (PLEG): container finished" podID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerID="0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e" exitCode=143
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.887607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerDied","Data":"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e"}
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.898330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd84c444-81fa-4206-8517-a25ba61c7209","Type":"ContainerStarted","Data":"ded91620ab7d04efddcdb373489edc865deddf3a343a1e2bba34a9263f4f2a76"}
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.900623 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" containerID="cri-o://6ac119b788d458fe7ac1c4ff4e0504c0c1bcf69720b6056366a73205c669289d" gracePeriod=30
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.900782 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon-log" containerID="cri-o://5fd6a5bd12a195df40342fdf1541abf8c63ee4e04bcc6f0c8b900c346830590c" gracePeriod=30
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.905528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.905617 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.905786 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config-secret\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:18 crc kubenswrapper[4698]: I1006 12:04:18.905861 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwm4h\" (UniqueName: \"kubernetes.io/projected/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-kube-api-access-wwm4h\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.009396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.009449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.009515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config-secret\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.009541 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwm4h\" (UniqueName: \"kubernetes.io/projected/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-kube-api-access-wwm4h\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.014557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.025949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.026233 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-openstack-config-secret\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.036623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwm4h\" (UniqueName: \"kubernetes.io/projected/832ec6ae-a05c-4838-93d2-8957d3dcdc6a-kube-api-access-wwm4h\") pod \"openstackclient\" (UID: \"832ec6ae-a05c-4838-93d2-8957d3dcdc6a\") " pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.087259 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.364171 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c9462d-5711-493d-ad40-ac0e5ff9d037" path="/var/lib/kubelet/pods/71c9462d-5711-493d-ad40-ac0e5ff9d037/volumes"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.524673 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf85e7a86-219c-4d1e-922c-8d8f4fec787d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf85e7a86-219c-4d1e-922c-8d8f4fec787d] : Timed out while waiting for systemd to remove kubepods-besteffort-podf85e7a86_219c_4d1e_922c_8d8f4fec787d.slice"
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.620456 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.939243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd84c444-81fa-4206-8517-a25ba61c7209","Type":"ContainerStarted","Data":"1d3b2dec9188d99456e5cbab800c1edddffab83465697a9051aa3d07cf5bcd3d"}
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.946348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"832ec6ae-a05c-4838-93d2-8957d3dcdc6a","Type":"ContainerStarted","Data":"0f2c9ed8c92ee303b88aaaa24dd80286d1aa051053e94104824d85c5718d8948"}
Oct 06 12:04:19 crc kubenswrapper[4698]: I1006 12:04:19.991514 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.991477134 podStartE2EDuration="3.991477134s" podCreationTimestamp="2025-10-06 12:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:19.977602535 +0000 UTC m=+1147.390294708" watchObservedRunningTime="2025-10-06 12:04:19.991477134 +0000 UTC m=+1147.404169307"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.129218 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b578fc998-97xd7" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:37988->10.217.0.179:9311: read: connection reset by peer"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.129440 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b578fc998-97xd7" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:37984->10.217.0.179:9311: read: connection reset by peer"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.267320 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.315292 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.434824 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.773196 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b578fc998-97xd7"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.944562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbw8\" (UniqueName: \"kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8\") pod \"b0844339-78f0-44dc-bef5-dcc9d46ff389\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") "
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.944642 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs\") pod \"b0844339-78f0-44dc-bef5-dcc9d46ff389\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") "
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.944946 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data\") pod \"b0844339-78f0-44dc-bef5-dcc9d46ff389\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") "
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.945039 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom\") pod \"b0844339-78f0-44dc-bef5-dcc9d46ff389\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") "
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.945131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle\") pod \"b0844339-78f0-44dc-bef5-dcc9d46ff389\" (UID: \"b0844339-78f0-44dc-bef5-dcc9d46ff389\") "
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.947733 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs" (OuterVolumeSpecName: "logs") pod "b0844339-78f0-44dc-bef5-dcc9d46ff389" (UID: "b0844339-78f0-44dc-bef5-dcc9d46ff389"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.957619 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b0844339-78f0-44dc-bef5-dcc9d46ff389" (UID: "b0844339-78f0-44dc-bef5-dcc9d46ff389"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.962084 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8" (OuterVolumeSpecName: "kube-api-access-scbw8") pod "b0844339-78f0-44dc-bef5-dcc9d46ff389" (UID: "b0844339-78f0-44dc-bef5-dcc9d46ff389"). InnerVolumeSpecName "kube-api-access-scbw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.990375 4698 generic.go:334] "Generic (PLEG): container finished" podID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerID="6ac119b788d458fe7ac1c4ff4e0504c0c1bcf69720b6056366a73205c669289d" exitCode=0
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.990426 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerDied","Data":"6ac119b788d458fe7ac1c4ff4e0504c0c1bcf69720b6056366a73205c669289d"}
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.995009 4698 generic.go:334] "Generic (PLEG): container finished" podID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerID="b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385" exitCode=0
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.995108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerDied","Data":"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385"}
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.995155 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b578fc998-97xd7" event={"ID":"b0844339-78f0-44dc-bef5-dcc9d46ff389","Type":"ContainerDied","Data":"cb4fe6efe747b09511c893b60e4bdfb2f485a87a1c333082227dbf393c0ae234"}
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.995183 4698 scope.go:117] "RemoveContainer" containerID="b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385"
Oct 06 12:04:22 crc kubenswrapper[4698]: I1006 12:04:22.995421 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b578fc998-97xd7"
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.019957 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0844339-78f0-44dc-bef5-dcc9d46ff389" (UID: "b0844339-78f0-44dc-bef5-dcc9d46ff389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.022866 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data" (OuterVolumeSpecName: "config-data") pod "b0844339-78f0-44dc-bef5-dcc9d46ff389" (UID: "b0844339-78f0-44dc-bef5-dcc9d46ff389"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054043 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054096 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054111 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0844339-78f0-44dc-bef5-dcc9d46ff389-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054130 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scbw8\" (UniqueName: \"kubernetes.io/projected/b0844339-78f0-44dc-bef5-dcc9d46ff389-kube-api-access-scbw8\")
on node \"crc\" DevicePath \"\"" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054151 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0844339-78f0-44dc-bef5-dcc9d46ff389-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.054348 4698 scope.go:117] "RemoveContainer" containerID="0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.083044 4698 scope.go:117] "RemoveContainer" containerID="b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385" Oct 06 12:04:23 crc kubenswrapper[4698]: E1006 12:04:23.083771 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385\": container with ID starting with b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385 not found: ID does not exist" containerID="b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.083836 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385"} err="failed to get container status \"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385\": rpc error: code = NotFound desc = could not find container \"b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385\": container with ID starting with b13890858e83db5ab7531ef98a0563976acecd91560e2ca921f05227664c8385 not found: ID does not exist" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.083870 4698 scope.go:117] "RemoveContainer" containerID="0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e" Oct 06 12:04:23 crc kubenswrapper[4698]: E1006 12:04:23.084578 4698 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e\": container with ID starting with 0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e not found: ID does not exist" containerID="0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.084626 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e"} err="failed to get container status \"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e\": rpc error: code = NotFound desc = could not find container \"0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e\": container with ID starting with 0342dcb88d7eeeb5d9079a1daff8d20fe5a45ed5263206da38d824d95dc4170e not found: ID does not exist" Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.395061 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"] Oct 06 12:04:23 crc kubenswrapper[4698]: I1006 12:04:23.395101 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b578fc998-97xd7"] Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.343717 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" path="/var/lib/kubelet/pods/b0844339-78f0-44dc-bef5-dcc9d46ff389/volumes" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.533076 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7bd5b9f8ff-k9cfq"] Oct 06 12:04:25 crc kubenswrapper[4698]: E1006 12:04:25.540834 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.540855 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api" Oct 06 12:04:25 crc kubenswrapper[4698]: E1006 12:04:25.540911 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api-log" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.540917 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api-log" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.541168 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.541209 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0844339-78f0-44dc-bef5-dcc9d46ff389" containerName="barbican-api-log" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.542610 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.544496 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.545598 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.545842 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.555547 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7bd5b9f8ff-k9cfq"] Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqhs5\" (UniqueName: 
\"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-kube-api-access-dqhs5\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-run-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-internal-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-public-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-config-data\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-etc-swift\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-log-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.636487 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-combined-ca-bundle\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-internal-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739258 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-public-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739375 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-config-data\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-etc-swift\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-log-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739499 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-combined-ca-bundle\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739587 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqhs5\" (UniqueName: \"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-kube-api-access-dqhs5\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.739616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-run-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.740453 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-log-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.740530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6900b347-8ed3-4474-b6b1-623471b2a03f-run-httpd\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.746599 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-internal-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.748868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-etc-swift\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.753642 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-config-data\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: 
\"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.754540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-public-tls-certs\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.757596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqhs5\" (UniqueName: \"kubernetes.io/projected/6900b347-8ed3-4474-b6b1-623471b2a03f-kube-api-access-dqhs5\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.766657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6900b347-8ed3-4474-b6b1-623471b2a03f-combined-ca-bundle\") pod \"swift-proxy-7bd5b9f8ff-k9cfq\" (UID: \"6900b347-8ed3-4474-b6b1-623471b2a03f\") " pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:25 crc kubenswrapper[4698]: I1006 12:04:25.871634 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.964461 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.965534 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-central-agent" containerID="cri-o://66aa7276da3dfc405f80396bfaed8a3b72dc3e145d6f6bda11a5e3d128440846" gracePeriod=30 Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.965706 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="sg-core" containerID="cri-o://fedea8abb60bb1fa0b5eb640d9cdb5098a60d8a9898e38666fdc0cf2b58db996" gracePeriod=30 Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.965787 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-notification-agent" containerID="cri-o://6d9135d7615114234ddd22f25832f22c5a4b3c1db59554aefb1312d956cfde77" gracePeriod=30 Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.965976 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="proxy-httpd" containerID="cri-o://1aceb51e3bf9c9d97da734b7dfecdda0421aa05cb446c9ed75bc391983866f1d" gracePeriod=30 Oct 06 12:04:26 crc kubenswrapper[4698]: I1006 12:04:26.979792 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": EOF" Oct 06 12:04:27 crc kubenswrapper[4698]: I1006 12:04:27.820726 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084747 4698 generic.go:334] "Generic (PLEG): container finished" podID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerID="1aceb51e3bf9c9d97da734b7dfecdda0421aa05cb446c9ed75bc391983866f1d" exitCode=0 Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084783 4698 generic.go:334] "Generic (PLEG): container finished" podID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerID="fedea8abb60bb1fa0b5eb640d9cdb5098a60d8a9898e38666fdc0cf2b58db996" exitCode=2 Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084793 4698 generic.go:334] "Generic (PLEG): container finished" podID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerID="66aa7276da3dfc405f80396bfaed8a3b72dc3e145d6f6bda11a5e3d128440846" exitCode=0 Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerDied","Data":"1aceb51e3bf9c9d97da734b7dfecdda0421aa05cb446c9ed75bc391983866f1d"} Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerDied","Data":"fedea8abb60bb1fa0b5eb640d9cdb5098a60d8a9898e38666fdc0cf2b58db996"} Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.084853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerDied","Data":"66aa7276da3dfc405f80396bfaed8a3b72dc3e145d6f6bda11a5e3d128440846"} Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.821350 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.821595 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-log" containerID="cri-o://86c5bb37c206b2dd1c0aa83d1b825d8a1c283fd9a75e783d8cc44f302f1cbf94" gracePeriod=30 Oct 06 12:04:28 crc kubenswrapper[4698]: I1006 12:04:28.821738 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-httpd" containerID="cri-o://f584419d25b819f114823ad54571c894825e0ac230c6b2a162ba40bfd4286c8d" gracePeriod=30 Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.113863 4698 generic.go:334] "Generic (PLEG): container finished" podID="69148131-5b31-411f-b3be-729ee6530f9f" containerID="86c5bb37c206b2dd1c0aa83d1b825d8a1c283fd9a75e783d8cc44f302f1cbf94" exitCode=143 Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.113952 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerDied","Data":"86c5bb37c206b2dd1c0aa83d1b825d8a1c283fd9a75e783d8cc44f302f1cbf94"} Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.391984 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zn9rc"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.393662 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.404765 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zn9rc"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.456923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n847\" (UniqueName: \"kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847\") pod \"nova-api-db-create-zn9rc\" (UID: \"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900\") " pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.488318 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rqmcv"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.489694 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.497753 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rqmcv"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.564797 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8qxp\" (UniqueName: \"kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp\") pod \"nova-cell0-db-create-rqmcv\" (UID: \"d7456d1f-e120-4aa6-bfcc-720c02e5a645\") " pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.564953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n847\" (UniqueName: \"kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847\") pod \"nova-api-db-create-zn9rc\" (UID: \"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900\") " pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:29 crc kubenswrapper[4698]: 
I1006 12:04:29.586994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n847\" (UniqueName: \"kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847\") pod \"nova-api-db-create-zn9rc\" (UID: \"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900\") " pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.675043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8qxp\" (UniqueName: \"kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp\") pod \"nova-cell0-db-create-rqmcv\" (UID: \"d7456d1f-e120-4aa6-bfcc-720c02e5a645\") " pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.704933 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dmtrj"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.718730 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.725758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8qxp\" (UniqueName: \"kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp\") pod \"nova-cell0-db-create-rqmcv\" (UID: \"d7456d1f-e120-4aa6-bfcc-720c02e5a645\") " pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.726085 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.746950 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dmtrj"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.779860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvkrb\" (UniqueName: \"kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb\") pod \"nova-cell1-db-create-dmtrj\" (UID: \"00458aeb-06de-40c3-aa85-2c88c9cb4229\") " pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.809177 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.837311 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.837721 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-log" containerID="cri-o://6e2ce57338624151c8d74239a24c93aa48c6cb9252334928d07453d6b3f9a102" gracePeriod=30 Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.840215 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-httpd" containerID="cri-o://173b48a8cad7a4c51e3bb8a8d166b5b86cb450c616657a5b4be1699c5283190f" gracePeriod=30 Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.883467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvkrb\" (UniqueName: \"kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb\") pod 
\"nova-cell1-db-create-dmtrj\" (UID: \"00458aeb-06de-40c3-aa85-2c88c9cb4229\") " pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:29 crc kubenswrapper[4698]: I1006 12:04:29.900941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvkrb\" (UniqueName: \"kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb\") pod \"nova-cell1-db-create-dmtrj\" (UID: \"00458aeb-06de-40c3-aa85-2c88c9cb4229\") " pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:30 crc kubenswrapper[4698]: I1006 12:04:30.126707 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:30 crc kubenswrapper[4698]: I1006 12:04:30.137767 4698 generic.go:334] "Generic (PLEG): container finished" podID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerID="6e2ce57338624151c8d74239a24c93aa48c6cb9252334928d07453d6b3f9a102" exitCode=143 Oct 06 12:04:30 crc kubenswrapper[4698]: I1006 12:04:30.137832 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerDied","Data":"6e2ce57338624151c8d74239a24c93aa48c6cb9252334928d07453d6b3f9a102"} Oct 06 12:04:31 crc kubenswrapper[4698]: I1006 12:04:31.156258 4698 generic.go:334] "Generic (PLEG): container finished" podID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerID="6d9135d7615114234ddd22f25832f22c5a4b3c1db59554aefb1312d956cfde77" exitCode=0 Oct 06 12:04:31 crc kubenswrapper[4698]: I1006 12:04:31.156320 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerDied","Data":"6d9135d7615114234ddd22f25832f22c5a4b3c1db59554aefb1312d956cfde77"} Oct 06 12:04:32 crc kubenswrapper[4698]: I1006 12:04:32.177821 4698 generic.go:334] "Generic (PLEG): container finished" podID="69148131-5b31-411f-b3be-729ee6530f9f" 
containerID="f584419d25b819f114823ad54571c894825e0ac230c6b2a162ba40bfd4286c8d" exitCode=0 Oct 06 12:04:32 crc kubenswrapper[4698]: I1006 12:04:32.178357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerDied","Data":"f584419d25b819f114823ad54571c894825e0ac230c6b2a162ba40bfd4286c8d"} Oct 06 12:04:32 crc kubenswrapper[4698]: I1006 12:04:32.436332 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.206562 4698 generic.go:334] "Generic (PLEG): container finished" podID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerID="173b48a8cad7a4c51e3bb8a8d166b5b86cb450c616657a5b4be1699c5283190f" exitCode=0 Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.206643 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerDied","Data":"173b48a8cad7a4c51e3bb8a8d166b5b86cb450c616657a5b4be1699c5283190f"} Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.294217 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": dial tcp 10.217.0.168:9292: connect: connection refused" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.294774 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-log" probeResult="failure" 
output="Get \"https://10.217.0.168:9292/healthcheck\": dial tcp 10.217.0.168:9292: connect: connection refused" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.616794 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.678565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.679151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9b8t\" (UniqueName: \"kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.679417 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.679448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.680123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.680443 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.684481 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.684634 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.684860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml\") pod \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\" (UID: \"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.687264 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.687649 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.688279 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts" (OuterVolumeSpecName: "scripts") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.688522 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t" (OuterVolumeSpecName: "kube-api-access-p9b8t") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "kube-api-access-p9b8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.730229 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.731368 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.784739 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.790478 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.790554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7mjg\" (UniqueName: \"kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.790734 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.790880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.790986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.791306 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.791839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.791905 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run\") pod \"69148131-5b31-411f-b3be-729ee6530f9f\" (UID: \"69148131-5b31-411f-b3be-729ee6530f9f\") " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.792677 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9b8t\" (UniqueName: \"kubernetes.io/projected/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-kube-api-access-p9b8t\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.792695 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.792708 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.792718 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.793094 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.793398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs" (OuterVolumeSpecName: "logs") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.795596 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts" (OuterVolumeSpecName: "scripts") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.798088 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.805246 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg" (OuterVolumeSpecName: "kube-api-access-h7mjg") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). 
InnerVolumeSpecName "kube-api-access-h7mjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.826192 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.871037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.871487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data" (OuterVolumeSpecName: "config-data") pod "69148131-5b31-411f-b3be-729ee6530f9f" (UID: "69148131-5b31-411f-b3be-729ee6530f9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.875930 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data" (OuterVolumeSpecName: "config-data") pod "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" (UID: "d8dfbe96-cb65-4163-9410-9adfa1f0dfe0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.896567 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.896800 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.896860 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148131-5b31-411f-b3be-729ee6530f9f-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.896918 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.896971 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.897040 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7mjg\" (UniqueName: \"kubernetes.io/projected/69148131-5b31-411f-b3be-729ee6530f9f-kube-api-access-h7mjg\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.897320 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.897424 4698 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.897498 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148131-5b31-411f-b3be-729ee6530f9f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:33 crc kubenswrapper[4698]: I1006 12:04:33.941678 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.016473 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.135222 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dmtrj"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.185210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zn9rc"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.248280 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rqmcv"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.251292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zn9rc" event={"ID":"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900","Type":"ContainerStarted","Data":"2e37d19bf582ea6665d8aa8ee0515fdb18b1437eb070690b9a0799db66a6feab"} Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.260627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8dfbe96-cb65-4163-9410-9adfa1f0dfe0","Type":"ContainerDied","Data":"eb0cadc440a44af9863b091e53246309e6f4ecee02e2d81f636e41e6f53b7b67"} Oct 06 12:04:34 
crc kubenswrapper[4698]: I1006 12:04:34.260676 4698 scope.go:117] "RemoveContainer" containerID="1aceb51e3bf9c9d97da734b7dfecdda0421aa05cb446c9ed75bc391983866f1d" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.260891 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.296937 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148131-5b31-411f-b3be-729ee6530f9f","Type":"ContainerDied","Data":"25f4f23b2a7862ec85d21359e3e90c8a7e93137da0da4f66a7449cf4ec772ce9"} Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.297068 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.316316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"832ec6ae-a05c-4838-93d2-8957d3dcdc6a","Type":"ContainerStarted","Data":"3fd3f59442fc51736d9ff95303c29eec3c58842402d5141e481f5b9708aa5bed"} Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.322005 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dmtrj" event={"ID":"00458aeb-06de-40c3-aa85-2c88c9cb4229","Type":"ContainerStarted","Data":"82e9dcddbd6145bd7dde32877a656e9c464e0c007b068e118d00b408c9edc16a"} Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.391965 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.8258244919999997 podStartE2EDuration="16.391932491s" podCreationTimestamp="2025-10-06 12:04:18 +0000 UTC" firstStartedPulling="2025-10-06 12:04:19.642259801 +0000 UTC m=+1147.054951974" lastFinishedPulling="2025-10-06 12:04:33.2083678 +0000 UTC m=+1160.621059973" observedRunningTime="2025-10-06 12:04:34.353499593 +0000 UTC m=+1161.766191766" 
watchObservedRunningTime="2025-10-06 12:04:34.391932491 +0000 UTC m=+1161.804624704" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.434123 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.437938 4698 scope.go:117] "RemoveContainer" containerID="fedea8abb60bb1fa0b5eb640d9cdb5098a60d8a9898e38666fdc0cf2b58db996" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.449161 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459361 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459796 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="proxy-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459808 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="proxy-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459816 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-log" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459822 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-log" Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459837 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="sg-core" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459844 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="sg-core" Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459857 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-notification-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459863 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-notification-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459885 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459891 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: E1006 12:04:34.459904 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-central-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.459911 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-central-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460191 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-log" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460214 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="69148131-5b31-411f-b3be-729ee6530f9f" containerName="glance-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460226 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="sg-core" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460237 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-central-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460249 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="ceilometer-notification-agent" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.460261 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" containerName="proxy-httpd" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.462128 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.466517 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.466803 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.474775 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.487272 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.503106 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.511034 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.512682 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.518044 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.518288 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.524258 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.543554 4698 scope.go:117] "RemoveContainer" containerID="6d9135d7615114234ddd22f25832f22c5a4b3c1db59554aefb1312d956cfde77" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.627030 4698 scope.go:117] "RemoveContainer" containerID="66aa7276da3dfc405f80396bfaed8a3b72dc3e145d6f6bda11a5e3d128440846" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-logs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640274 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640320 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5m4w\" (UniqueName: \"kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tjw\" (UniqueName: \"kubernetes.io/projected/8f518e2b-0a37-49eb-83f3-a393139e84c9-kube-api-access-k8tjw\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts\") pod 
\"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640447 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640467 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640497 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640521 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640540 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.640558 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.659237 4698 scope.go:117] "RemoveContainer" containerID="f584419d25b819f114823ad54571c894825e0ac230c6b2a162ba40bfd4286c8d" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.684432 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.700680 4698 scope.go:117] "RemoveContainer" containerID="86c5bb37c206b2dd1c0aa83d1b825d8a1c283fd9a75e783d8cc44f302f1cbf94" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742221 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742284 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742311 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742353 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742386 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-logs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " 
pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742454 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742482 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742514 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5m4w\" (UniqueName: \"kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tjw\" (UniqueName: \"kubernetes.io/projected/8f518e2b-0a37-49eb-83f3-a393139e84c9-kube-api-access-k8tjw\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.742567 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.754379 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.754700 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-logs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.755524 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.759354 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.759556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.760241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.760493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f518e2b-0a37-49eb-83f3-a393139e84c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.762543 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.762955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.766611 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.767746 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.772060 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.783499 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f518e2b-0a37-49eb-83f3-a393139e84c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.790370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tjw\" (UniqueName: \"kubernetes.io/projected/8f518e2b-0a37-49eb-83f3-a393139e84c9-kube-api-access-k8tjw\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.812590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5m4w\" (UniqueName: \"kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w\") pod \"ceilometer-0\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.827386 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8f518e2b-0a37-49eb-83f3-a393139e84c9\") " pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.844202 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.844294 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r6s5\" (UniqueName: \"kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.844376 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.844413 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.844432 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.845431 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs" (OuterVolumeSpecName: "logs") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.845936 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.846085 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.846100 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.846129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs\") pod \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\" (UID: \"29816275-45db-4e16-bdbc-c3a6a2f67a7e\") " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.846995 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.849241 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5" (OuterVolumeSpecName: "kube-api-access-2r6s5") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "kube-api-access-2r6s5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.849479 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.850628 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts" (OuterVolumeSpecName: "scripts") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.852151 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.889595 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.905785 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.919127 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data" (OuterVolumeSpecName: "config-data") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.931129 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "29816275-45db-4e16-bdbc-c3a6a2f67a7e" (UID: "29816275-45db-4e16-bdbc-c3a6a2f67a7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949435 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29816275-45db-4e16-bdbc-c3a6a2f67a7e-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949504 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949523 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949533 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc 
kubenswrapper[4698]: I1006 12:04:34.949543 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r6s5\" (UniqueName: \"kubernetes.io/projected/29816275-45db-4e16-bdbc-c3a6a2f67a7e-kube-api-access-2r6s5\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949552 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.949560 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29816275-45db-4e16-bdbc-c3a6a2f67a7e-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:34 crc kubenswrapper[4698]: I1006 12:04:34.969272 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.053159 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.212021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7bd5b9f8ff-k9cfq"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.345221 4698 generic.go:334] "Generic (PLEG): container finished" podID="d7456d1f-e120-4aa6-bfcc-720c02e5a645" containerID="ec84410d5c45d2cb4e65d091fa3dea56bf42f509d9ca919f1655ed5b16a3c069" exitCode=0 Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.353520 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69148131-5b31-411f-b3be-729ee6530f9f" path="/var/lib/kubelet/pods/69148131-5b31-411f-b3be-729ee6530f9f/volumes" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.355166 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8dfbe96-cb65-4163-9410-9adfa1f0dfe0" path="/var/lib/kubelet/pods/d8dfbe96-cb65-4163-9410-9adfa1f0dfe0/volumes" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.356365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rqmcv" event={"ID":"d7456d1f-e120-4aa6-bfcc-720c02e5a645","Type":"ContainerDied","Data":"ec84410d5c45d2cb4e65d091fa3dea56bf42f509d9ca919f1655ed5b16a3c069"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.356398 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rqmcv" event={"ID":"d7456d1f-e120-4aa6-bfcc-720c02e5a645","Type":"ContainerStarted","Data":"1538796db3ddd9df28d4773d4a5b574a66494259854813fc42b8038a4dad094f"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.360160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" event={"ID":"6900b347-8ed3-4474-b6b1-623471b2a03f","Type":"ContainerStarted","Data":"af872cca843b91caa76c0453ae0644b1c0747e1041a566543bccb4803e0365c8"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.396525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29816275-45db-4e16-bdbc-c3a6a2f67a7e","Type":"ContainerDied","Data":"974db308e22dc26a356ba6e3a7824df5009e6d2ca1863889cd109c391af5e9ff"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.396583 4698 scope.go:117] "RemoveContainer" containerID="173b48a8cad7a4c51e3bb8a8d166b5b86cb450c616657a5b4be1699c5283190f" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.396740 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.407314 4698 generic.go:334] "Generic (PLEG): container finished" podID="00458aeb-06de-40c3-aa85-2c88c9cb4229" containerID="0067d8c56101b5252796f649d4ef33e2b005c0352bb330a242919c239efed96e" exitCode=0 Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.407457 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dmtrj" event={"ID":"00458aeb-06de-40c3-aa85-2c88c9cb4229","Type":"ContainerDied","Data":"0067d8c56101b5252796f649d4ef33e2b005c0352bb330a242919c239efed96e"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.426767 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" containerID="fc8a876ec55afbcc25b8f1748c61fe85dc24811e1d5b8936e22f28f491fb4110" exitCode=0 Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.426975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zn9rc" event={"ID":"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900","Type":"ContainerDied","Data":"fc8a876ec55afbcc25b8f1748c61fe85dc24811e1d5b8936e22f28f491fb4110"} Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.442824 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.444177 4698 scope.go:117] "RemoveContainer" containerID="6e2ce57338624151c8d74239a24c93aa48c6cb9252334928d07453d6b3f9a102" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.458586 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.508589 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: E1006 12:04:35.509099 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-httpd" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.509112 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-httpd" Oct 06 12:04:35 crc kubenswrapper[4698]: E1006 12:04:35.509162 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-log" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.509168 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-log" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.509397 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-httpd" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.509419 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" containerName="glance-log" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.510586 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.513970 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.536035 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.536523 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.606784 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.634241 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706463 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706654 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706689 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706715 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8r4w\" (UniqueName: \"kubernetes.io/projected/0afe62d1-9751-4c32-820b-770b71e5599f-kube-api-access-s8r4w\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.706893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.808940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809517 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809567 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809603 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8r4w\" (UniqueName: 
\"kubernetes.io/projected/0afe62d1-9751-4c32-820b-770b71e5599f-kube-api-access-s8r4w\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.809779 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.810407 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.810443 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.810414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afe62d1-9751-4c32-820b-770b71e5599f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.815466 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.815747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.816073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.817968 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afe62d1-9751-4c32-820b-770b71e5599f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " 
pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.834386 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8r4w\" (UniqueName: \"kubernetes.io/projected/0afe62d1-9751-4c32-820b-770b71e5599f-kube-api-access-s8r4w\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.855883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0afe62d1-9751-4c32-820b-770b71e5599f\") " pod="openstack/glance-default-internal-api-0" Oct 06 12:04:35 crc kubenswrapper[4698]: I1006 12:04:35.882989 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.444098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f518e2b-0a37-49eb-83f3-a393139e84c9","Type":"ContainerStarted","Data":"290ed6953560e0abf54ab2a796653a43aa3bfbbd4e7ac35921e0c24952cc7ea9"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.444995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f518e2b-0a37-49eb-83f3-a393139e84c9","Type":"ContainerStarted","Data":"efc97a5c141833b6718689b2d45820dc2543388c88fd862c9dd4966de27fdcda"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.448176 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerStarted","Data":"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.448237 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerStarted","Data":"9fa19bb4d9d3379d7ca350595d833831a400e8c5305652c669eb7d9cd28618c5"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.450797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" event={"ID":"6900b347-8ed3-4474-b6b1-623471b2a03f","Type":"ContainerStarted","Data":"1da98836a80727fbcf2c23393e052b33deaddecc12278a2c81d0d2800182a605"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.450834 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" event={"ID":"6900b347-8ed3-4474-b6b1-623471b2a03f","Type":"ContainerStarted","Data":"00ea546d02c8592e37ebde3ffd2095b322b9771fa69cfcb6b77da396b253b7f7"} Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.451567 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.485417 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" podStartSLOduration=11.485396048 podStartE2EDuration="11.485396048s" podCreationTimestamp="2025-10-06 12:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:36.48383008 +0000 UTC m=+1163.896522263" watchObservedRunningTime="2025-10-06 12:04:36.485396048 +0000 UTC m=+1163.898088221" Oct 06 12:04:36 crc kubenswrapper[4698]: I1006 12:04:36.612658 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.302662 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.381678 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29816275-45db-4e16-bdbc-c3a6a2f67a7e" path="/var/lib/kubelet/pods/29816275-45db-4e16-bdbc-c3a6a2f67a7e/volumes" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.395962 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8qxp\" (UniqueName: \"kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp\") pod \"d7456d1f-e120-4aa6-bfcc-720c02e5a645\" (UID: \"d7456d1f-e120-4aa6-bfcc-720c02e5a645\") " Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.403808 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.408612 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp" (OuterVolumeSpecName: "kube-api-access-v8qxp") pod "d7456d1f-e120-4aa6-bfcc-720c02e5a645" (UID: "d7456d1f-e120-4aa6-bfcc-720c02e5a645"). InnerVolumeSpecName "kube-api-access-v8qxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.424674 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.502112 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n847\" (UniqueName: \"kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847\") pod \"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900\" (UID: \"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900\") " Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.502170 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvkrb\" (UniqueName: \"kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb\") pod \"00458aeb-06de-40c3-aa85-2c88c9cb4229\" (UID: \"00458aeb-06de-40c3-aa85-2c88c9cb4229\") " Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.502906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rqmcv" event={"ID":"d7456d1f-e120-4aa6-bfcc-720c02e5a645","Type":"ContainerDied","Data":"1538796db3ddd9df28d4773d4a5b574a66494259854813fc42b8038a4dad094f"} Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.502945 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1538796db3ddd9df28d4773d4a5b574a66494259854813fc42b8038a4dad094f" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.503082 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rqmcv" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.503148 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8qxp\" (UniqueName: \"kubernetes.io/projected/d7456d1f-e120-4aa6-bfcc-720c02e5a645-kube-api-access-v8qxp\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.508717 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb" (OuterVolumeSpecName: "kube-api-access-fvkrb") pod "00458aeb-06de-40c3-aa85-2c88c9cb4229" (UID: "00458aeb-06de-40c3-aa85-2c88c9cb4229"). InnerVolumeSpecName "kube-api-access-fvkrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.508835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847" (OuterVolumeSpecName: "kube-api-access-9n847") pod "9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" (UID: "9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900"). InnerVolumeSpecName "kube-api-access-9n847". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.520649 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0afe62d1-9751-4c32-820b-770b71e5599f","Type":"ContainerStarted","Data":"a738eb40f3b18bd399b775950283aafef8c01024173a1131d9964af8defcf0c3"} Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.525929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dmtrj" event={"ID":"00458aeb-06de-40c3-aa85-2c88c9cb4229","Type":"ContainerDied","Data":"82e9dcddbd6145bd7dde32877a656e9c464e0c007b068e118d00b408c9edc16a"} Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.525972 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e9dcddbd6145bd7dde32877a656e9c464e0c007b068e118d00b408c9edc16a" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.526041 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dmtrj" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.546220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zn9rc" event={"ID":"9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900","Type":"ContainerDied","Data":"2e37d19bf582ea6665d8aa8ee0515fdb18b1437eb070690b9a0799db66a6feab"} Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.546277 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e37d19bf582ea6665d8aa8ee0515fdb18b1437eb070690b9a0799db66a6feab" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.546280 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zn9rc" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.546310 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.606394 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n847\" (UniqueName: \"kubernetes.io/projected/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900-kube-api-access-9n847\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:37 crc kubenswrapper[4698]: I1006 12:04:37.606751 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvkrb\" (UniqueName: \"kubernetes.io/projected/00458aeb-06de-40c3-aa85-2c88c9cb4229-kube-api-access-fvkrb\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.570124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0afe62d1-9751-4c32-820b-770b71e5599f","Type":"ContainerStarted","Data":"24b454388c2bb2d24b7f95025100a208dde79b7fea035f88b56617d2e3a36d74"} Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.570894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0afe62d1-9751-4c32-820b-770b71e5599f","Type":"ContainerStarted","Data":"8afe9695985d3a5ca945734e5ef03077b5e80cf1d43b4c196d3c8fbbb268f376"} Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.572440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f518e2b-0a37-49eb-83f3-a393139e84c9","Type":"ContainerStarted","Data":"4e2cb5c309c911397c468ef415dac9c238e3c129b193548a8560b440e2ea35ff"} Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.589349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerStarted","Data":"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b"} Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.589475 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerStarted","Data":"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5"} Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.600625 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.600603457 podStartE2EDuration="3.600603457s" podCreationTimestamp="2025-10-06 12:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:38.59418104 +0000 UTC m=+1166.006873213" watchObservedRunningTime="2025-10-06 12:04:38.600603457 +0000 UTC m=+1166.013295630" Oct 06 12:04:38 crc kubenswrapper[4698]: I1006 12:04:38.628712 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.628695193 podStartE2EDuration="4.628695193s" podCreationTimestamp="2025-10-06 12:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:38.627548945 +0000 UTC m=+1166.040241118" watchObservedRunningTime="2025-10-06 12:04:38.628695193 +0000 UTC m=+1166.041387366" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.583870 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1742-account-create-d49wh"] Oct 06 12:04:39 crc kubenswrapper[4698]: E1006 12:04:39.584574 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7456d1f-e120-4aa6-bfcc-720c02e5a645" containerName="mariadb-database-create" Oct 06 12:04:39 crc 
kubenswrapper[4698]: I1006 12:04:39.584587 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7456d1f-e120-4aa6-bfcc-720c02e5a645" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: E1006 12:04:39.584603 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.584611 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: E1006 12:04:39.584623 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00458aeb-06de-40c3-aa85-2c88c9cb4229" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.584629 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="00458aeb-06de-40c3-aa85-2c88c9cb4229" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.584835 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.584853 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="00458aeb-06de-40c3-aa85-2c88c9cb4229" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.584865 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7456d1f-e120-4aa6-bfcc-720c02e5a645" containerName="mariadb-database-create" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.585585 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.589709 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.631385 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1742-account-create-d49wh"] Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.666703 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcswd\" (UniqueName: \"kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd\") pod \"nova-api-1742-account-create-d49wh\" (UID: \"2fe1856c-014a-49a4-b3df-586640603de9\") " pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.774312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcswd\" (UniqueName: \"kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd\") pod \"nova-api-1742-account-create-d49wh\" (UID: \"2fe1856c-014a-49a4-b3df-586640603de9\") " pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.797448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcswd\" (UniqueName: \"kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd\") pod \"nova-api-1742-account-create-d49wh\" (UID: \"2fe1856c-014a-49a4-b3df-586640603de9\") " pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:39 crc kubenswrapper[4698]: I1006 12:04:39.916441 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.466872 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1742-account-create-d49wh"] Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.632381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1742-account-create-d49wh" event={"ID":"2fe1856c-014a-49a4-b3df-586640603de9","Type":"ContainerStarted","Data":"a4d1e9d900157d686ae60792bdf6b0aebdf094001f6a870af039d1af3a945c63"} Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.635874 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerStarted","Data":"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa"} Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.636161 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.663508 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.354854227 podStartE2EDuration="6.663488326s" podCreationTimestamp="2025-10-06 12:04:34 +0000 UTC" firstStartedPulling="2025-10-06 12:04:35.591007452 +0000 UTC m=+1163.003699625" lastFinishedPulling="2025-10-06 12:04:39.899641551 +0000 UTC m=+1167.312333724" observedRunningTime="2025-10-06 12:04:40.661761374 +0000 UTC m=+1168.074453567" watchObservedRunningTime="2025-10-06 12:04:40.663488326 +0000 UTC m=+1168.076180499" Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.902226 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:40 crc kubenswrapper[4698]: I1006 12:04:40.903985 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-7bd5b9f8ff-k9cfq" Oct 06 12:04:41 crc kubenswrapper[4698]: I1006 12:04:41.650851 4698 generic.go:334] "Generic (PLEG): container finished" podID="2fe1856c-014a-49a4-b3df-586640603de9" containerID="42de3eeaae5bb0142991acdb24c4a7049690d16b2c90435d1632671ff42bb7f1" exitCode=0 Oct 06 12:04:41 crc kubenswrapper[4698]: I1006 12:04:41.651030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1742-account-create-d49wh" event={"ID":"2fe1856c-014a-49a4-b3df-586640603de9","Type":"ContainerDied","Data":"42de3eeaae5bb0142991acdb24c4a7049690d16b2c90435d1632671ff42bb7f1"} Oct 06 12:04:42 crc kubenswrapper[4698]: I1006 12:04:42.434320 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-654cf8498d-s5tdp" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Oct 06 12:04:42 crc kubenswrapper[4698]: I1006 12:04:42.435190 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.088643 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.264078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcswd\" (UniqueName: \"kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd\") pod \"2fe1856c-014a-49a4-b3df-586640603de9\" (UID: \"2fe1856c-014a-49a4-b3df-586640603de9\") " Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.270966 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd" (OuterVolumeSpecName: "kube-api-access-qcswd") pod "2fe1856c-014a-49a4-b3df-586640603de9" (UID: "2fe1856c-014a-49a4-b3df-586640603de9"). InnerVolumeSpecName "kube-api-access-qcswd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.374192 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcswd\" (UniqueName: \"kubernetes.io/projected/2fe1856c-014a-49a4-b3df-586640603de9-kube-api-access-qcswd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.691652 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1742-account-create-d49wh" event={"ID":"2fe1856c-014a-49a4-b3df-586640603de9","Type":"ContainerDied","Data":"a4d1e9d900157d686ae60792bdf6b0aebdf094001f6a870af039d1af3a945c63"} Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.692097 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4d1e9d900157d686ae60792bdf6b0aebdf094001f6a870af039d1af3a945c63" Oct 06 12:04:43 crc kubenswrapper[4698]: I1006 12:04:43.692086 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1742-account-create-d49wh" Oct 06 12:04:44 crc kubenswrapper[4698]: I1006 12:04:44.890336 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:04:44 crc kubenswrapper[4698]: I1006 12:04:44.891632 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 12:04:44 crc kubenswrapper[4698]: I1006 12:04:44.945231 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:04:44 crc kubenswrapper[4698]: I1006 12:04:44.951601 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.451759 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.452123 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-central-agent" containerID="cri-o://48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.452232 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="proxy-httpd" containerID="cri-o://94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.452272 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-notification-agent" 
containerID="cri-o://1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.452285 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="sg-core" containerID="cri-o://a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" gracePeriod=30 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.714653 4698 generic.go:334] "Generic (PLEG): container finished" podID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerID="94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" exitCode=0 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.714986 4698 generic.go:334] "Generic (PLEG): container finished" podID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerID="a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" exitCode=2 Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.714973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerDied","Data":"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa"} Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.715102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerDied","Data":"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b"} Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.715750 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.715939 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: E1006 12:04:45.749676 4698 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfaac3a_1e3e_444d_b86e_4f34383a3fcc.slice/crio-a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfaac3a_1e3e_444d_b86e_4f34383a3fcc.slice/crio-94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfaac3a_1e3e_444d_b86e_4f34383a3fcc.slice/crio-conmon-a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.885259 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.886843 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.930657 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:45 crc kubenswrapper[4698]: I1006 12:04:45.951670 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.350366 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444134 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444173 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5m4w\" (UniqueName: \"kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444197 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444228 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444329 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts\") pod \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\" (UID: \"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc\") " Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444767 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.444812 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.454259 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w" (OuterVolumeSpecName: "kube-api-access-f5m4w") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "kube-api-access-f5m4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.454424 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts" (OuterVolumeSpecName: "scripts") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.478604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546672 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546703 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546717 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5m4w\" (UniqueName: \"kubernetes.io/projected/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-kube-api-access-f5m4w\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546725 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 
12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546734 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.546935 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.581127 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data" (OuterVolumeSpecName: "config-data") pod "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" (UID: "2dfaac3a-1e3e-444d-b86e-4f34383a3fcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.648494 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.648539 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728208 4698 generic.go:334] "Generic (PLEG): container finished" podID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerID="1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" exitCode=0 Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728249 4698 generic.go:334] "Generic (PLEG): container finished" podID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerID="48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" exitCode=0 Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerDied","Data":"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5"} Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728377 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerDied","Data":"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475"} Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dfaac3a-1e3e-444d-b86e-4f34383a3fcc","Type":"ContainerDied","Data":"9fa19bb4d9d3379d7ca350595d833831a400e8c5305652c669eb7d9cd28618c5"} Oct 06 
12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728328 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.728423 4698 scope.go:117] "RemoveContainer" containerID="94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.729537 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.729578 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.777090 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.783231 4698 scope.go:117] "RemoveContainer" containerID="a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.797615 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818196 4698 scope.go:117] "RemoveContainer" containerID="1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818357 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.818825 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-central-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818839 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-central-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.818855 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1856c-014a-49a4-b3df-586640603de9" containerName="mariadb-account-create" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818862 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1856c-014a-49a4-b3df-586640603de9" containerName="mariadb-account-create" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.818875 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="proxy-httpd" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818880 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="proxy-httpd" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.818899 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-notification-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818905 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-notification-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.818922 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="sg-core" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.818928 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="sg-core" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.819134 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="sg-core" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.819147 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-central-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 
12:04:46.819154 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="ceilometer-notification-agent" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.819163 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" containerName="proxy-httpd" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.819184 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1856c-014a-49a4-b3df-586640603de9" containerName="mariadb-account-create" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.820955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.821123 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.867739 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.869765 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.912330 4698 scope.go:117] "RemoveContainer" containerID="48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.936126 4698 scope.go:117] "RemoveContainer" containerID="94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.937524 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa\": container with ID starting with 94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa not found: ID does not exist" 
containerID="94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.937585 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa"} err="failed to get container status \"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa\": rpc error: code = NotFound desc = could not find container \"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa\": container with ID starting with 94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.937623 4698 scope.go:117] "RemoveContainer" containerID="a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.938035 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b\": container with ID starting with a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b not found: ID does not exist" containerID="a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938069 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b"} err="failed to get container status \"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b\": rpc error: code = NotFound desc = could not find container \"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b\": container with ID starting with a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938090 4698 scope.go:117] 
"RemoveContainer" containerID="1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.938293 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5\": container with ID starting with 1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5 not found: ID does not exist" containerID="1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938312 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5"} err="failed to get container status \"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5\": rpc error: code = NotFound desc = could not find container \"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5\": container with ID starting with 1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5 not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938328 4698 scope.go:117] "RemoveContainer" containerID="48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" Oct 06 12:04:46 crc kubenswrapper[4698]: E1006 12:04:46.938662 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475\": container with ID starting with 48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475 not found: ID does not exist" containerID="48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938682 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475"} err="failed to get container status \"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475\": rpc error: code = NotFound desc = could not find container \"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475\": container with ID starting with 48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475 not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938695 4698 scope.go:117] "RemoveContainer" containerID="94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938968 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa"} err="failed to get container status \"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa\": rpc error: code = NotFound desc = could not find container \"94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa\": container with ID starting with 94047b6507c274eeab0d89f81a9695bfb0abb2c501cb15451ad53536e7b216fa not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.938988 4698 scope.go:117] "RemoveContainer" containerID="a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.939256 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b"} err="failed to get container status \"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b\": rpc error: code = NotFound desc = could not find container \"a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b\": container with ID starting with a47651bb9656311878bbc22b99426e56cae5da58af6db02e18a207662875f68b not found: ID does not 
exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.939274 4698 scope.go:117] "RemoveContainer" containerID="1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.939471 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5"} err="failed to get container status \"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5\": rpc error: code = NotFound desc = could not find container \"1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5\": container with ID starting with 1bfc461ea15d6942b7d3cecce7f31044508bcae87d3d3fe1355d2d110e754af5 not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.939492 4698 scope.go:117] "RemoveContainer" containerID="48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.939683 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475"} err="failed to get container status \"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475\": rpc error: code = NotFound desc = could not find container \"48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475\": container with ID starting with 48f77aa2abaacc64dce9654584fa9f467720a876f85fd6d4a3fbc2e2ac670475 not found: ID does not exist" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.963510 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.963958 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlb79\" (UniqueName: \"kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.964001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.964096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.964127 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.964220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:46 crc kubenswrapper[4698]: I1006 12:04:46.964278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.066888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.066998 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.067064 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb79\" (UniqueName: \"kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.067108 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.067156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.067190 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.067307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.068662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.069237 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.071549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.072522 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " 
pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.074533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.087035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.088818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlb79\" (UniqueName: \"kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79\") pod \"ceilometer-0\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.189243 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.350753 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfaac3a-1e3e-444d-b86e-4f34383a3fcc" path="/var/lib/kubelet/pods/2dfaac3a-1e3e-444d-b86e-4f34383a3fcc/volumes" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.718917 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.743345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerStarted","Data":"3247e657a6adf20819464d132ae94393965b75dd3338b47e4ff5c3231fb07fbb"} Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.745787 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:47 crc kubenswrapper[4698]: I1006 12:04:47.745806 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 12:04:48 crc kubenswrapper[4698]: I1006 12:04:48.004511 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:04:48 crc kubenswrapper[4698]: I1006 12:04:48.005548 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 12:04:48 crc kubenswrapper[4698]: I1006 12:04:48.755961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerStarted","Data":"ed11c13f4112e1cd5f1416fac9822c1a0ee3d376548899053f1591095625ebae"} Oct 06 12:04:48 crc kubenswrapper[4698]: I1006 12:04:48.853893 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:48 crc kubenswrapper[4698]: I1006 12:04:48.854082 4698 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.149139 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.764007 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5041-account-create-f2wjp"] Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.765644 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.768122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.770145 4698 generic.go:334] "Generic (PLEG): container finished" podID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerID="5fd6a5bd12a195df40342fdf1541abf8c63ee4e04bcc6f0c8b900c346830590c" exitCode=137 Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.770939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerDied","Data":"5fd6a5bd12a195df40342fdf1541abf8c63ee4e04bcc6f0c8b900c346830590c"} Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.795509 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5041-account-create-f2wjp"] Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.868644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdcp\" (UniqueName: \"kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp\") pod \"nova-cell0-5041-account-create-f2wjp\" (UID: \"87f9e125-7190-4895-ac10-94ad7e66fea2\") " pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.970783 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdcp\" (UniqueName: \"kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp\") pod \"nova-cell0-5041-account-create-f2wjp\" (UID: \"87f9e125-7190-4895-ac10-94ad7e66fea2\") " pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.973493 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ff0e-account-create-mvz4n"] Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.975343 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:49 crc kubenswrapper[4698]: I1006 12:04:49.983305 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.005252 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ff0e-account-create-mvz4n"] Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.013144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdcp\" (UniqueName: \"kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp\") pod \"nova-cell0-5041-account-create-f2wjp\" (UID: \"87f9e125-7190-4895-ac10-94ad7e66fea2\") " pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.018123 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.018418 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="b760929b-e89c-4e54-8506-e6a61a100d84" containerName="watcher-decision-engine" containerID="cri-o://cf434a5fd4b064e31ddd2d029b4719de2f7d3a0251f98dec49f01db760bfd68c" gracePeriod=30 Oct 06 
12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.075418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5qr5\" (UniqueName: \"kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5\") pod \"nova-cell1-ff0e-account-create-mvz4n\" (UID: \"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc\") " pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.089175 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.178532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5qr5\" (UniqueName: \"kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5\") pod \"nova-cell1-ff0e-account-create-mvz4n\" (UID: \"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc\") " pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.201000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5qr5\" (UniqueName: \"kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5\") pod \"nova-cell1-ff0e-account-create-mvz4n\" (UID: \"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc\") " pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.366842 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.593111 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.701579 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.701789 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702255 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702331 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lzkl\" (UniqueName: 
\"kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702383 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs\") pod \"18ae0d1c-2545-4122-b2d9-3380fd017840\" (UID: \"18ae0d1c-2545-4122-b2d9-3380fd017840\") " Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.702980 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs" (OuterVolumeSpecName: "logs") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.712944 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl" (OuterVolumeSpecName: "kube-api-access-6lzkl") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "kube-api-access-6lzkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.713218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.750435 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5041-account-create-f2wjp"] Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.760658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.766350 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data" (OuterVolumeSpecName: "config-data") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.779832 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts" (OuterVolumeSpecName: "scripts") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.790526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerStarted","Data":"61c75477527aaa23b7cf181d2ef5c70bb32206b6b55461a1b424f70ac72de299"} Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.796196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5041-account-create-f2wjp" event={"ID":"87f9e125-7190-4895-ac10-94ad7e66fea2","Type":"ContainerStarted","Data":"79388c6a908524ecf3b7ed71e6ca0294153df764f5eaa92dca353b3d908efb61"} Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.801970 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654cf8498d-s5tdp" event={"ID":"18ae0d1c-2545-4122-b2d9-3380fd017840","Type":"ContainerDied","Data":"5ac0d95b1e8ccf90bffb375c338d1b3280984180b4be219e5946e1113869d440"} Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.802092 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-654cf8498d-s5tdp" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.802100 4698 scope.go:117] "RemoveContainer" containerID="6ac119b788d458fe7ac1c4ff4e0504c0c1bcf69720b6056366a73205c669289d" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805225 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ae0d1c-2545-4122-b2d9-3380fd017840-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805252 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805262 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805273 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lzkl\" (UniqueName: \"kubernetes.io/projected/18ae0d1c-2545-4122-b2d9-3380fd017840-kube-api-access-6lzkl\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805282 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18ae0d1c-2545-4122-b2d9-3380fd017840-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.805290 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.813303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "18ae0d1c-2545-4122-b2d9-3380fd017840" (UID: "18ae0d1c-2545-4122-b2d9-3380fd017840"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.880902 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.907626 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ae0d1c-2545-4122-b2d9-3380fd017840-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:50 crc kubenswrapper[4698]: I1006 12:04:50.950154 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ff0e-account-create-mvz4n"] Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.031413 4698 scope.go:117] "RemoveContainer" containerID="5fd6a5bd12a195df40342fdf1541abf8c63ee4e04bcc6f0c8b900c346830590c" Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.162654 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"] Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.182561 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-654cf8498d-s5tdp"] Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.364102 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" path="/var/lib/kubelet/pods/18ae0d1c-2545-4122-b2d9-3380fd017840/volumes" Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.835377 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" containerID="965ab0daa78e3030ce59e3cc374ec6c5b2bc8745327fcb25ed70b656de5052a7" exitCode=0 Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.835545 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" event={"ID":"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc","Type":"ContainerDied","Data":"965ab0daa78e3030ce59e3cc374ec6c5b2bc8745327fcb25ed70b656de5052a7"} Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.835581 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" event={"ID":"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc","Type":"ContainerStarted","Data":"2ff081b0a75131dea9c06daaa716068c40a37476cdf025d1eb5ac2160b79de6e"} Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.847457 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerStarted","Data":"8958387c8d3bb15f22f2797b6960cf329405890bb7fffbd2b9ec459f183e082b"} Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.853739 4698 generic.go:334] "Generic (PLEG): container finished" podID="87f9e125-7190-4895-ac10-94ad7e66fea2" containerID="191d539d441bab1ee1e0404ed5e9802a09b8d57c2503f1b82d140a04a499c6e2" exitCode=0 Oct 06 12:04:51 crc kubenswrapper[4698]: I1006 12:04:51.853805 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5041-account-create-f2wjp" event={"ID":"87f9e125-7190-4895-ac10-94ad7e66fea2","Type":"ContainerDied","Data":"191d539d441bab1ee1e0404ed5e9802a09b8d57c2503f1b82d140a04a499c6e2"} Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.876116 4698 generic.go:334] "Generic (PLEG): container finished" podID="b760929b-e89c-4e54-8506-e6a61a100d84" containerID="cf434a5fd4b064e31ddd2d029b4719de2f7d3a0251f98dec49f01db760bfd68c" exitCode=0 Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.876232 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b760929b-e89c-4e54-8506-e6a61a100d84","Type":"ContainerDied","Data":"cf434a5fd4b064e31ddd2d029b4719de2f7d3a0251f98dec49f01db760bfd68c"} Oct 06 
12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.877193 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b760929b-e89c-4e54-8506-e6a61a100d84","Type":"ContainerDied","Data":"dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb"} Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.877221 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb147fba3f67dc002b2a30033434ab6bb1b37afa4fc20ab1bf34a7f17f2d7fb" Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.886251 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.965705 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs\") pod \"b760929b-e89c-4e54-8506-e6a61a100d84\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.965876 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle\") pod \"b760929b-e89c-4e54-8506-e6a61a100d84\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.965912 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data\") pod \"b760929b-e89c-4e54-8506-e6a61a100d84\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.965953 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca\") pod \"b760929b-e89c-4e54-8506-e6a61a100d84\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.966253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67mct\" (UniqueName: \"kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct\") pod \"b760929b-e89c-4e54-8506-e6a61a100d84\" (UID: \"b760929b-e89c-4e54-8506-e6a61a100d84\") " Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.968920 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs" (OuterVolumeSpecName: "logs") pod "b760929b-e89c-4e54-8506-e6a61a100d84" (UID: "b760929b-e89c-4e54-8506-e6a61a100d84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:04:52 crc kubenswrapper[4698]: I1006 12:04:52.973065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct" (OuterVolumeSpecName: "kube-api-access-67mct") pod "b760929b-e89c-4e54-8506-e6a61a100d84" (UID: "b760929b-e89c-4e54-8506-e6a61a100d84"). InnerVolumeSpecName "kube-api-access-67mct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.002446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b760929b-e89c-4e54-8506-e6a61a100d84" (UID: "b760929b-e89c-4e54-8506-e6a61a100d84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.049295 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data" (OuterVolumeSpecName: "config-data") pod "b760929b-e89c-4e54-8506-e6a61a100d84" (UID: "b760929b-e89c-4e54-8506-e6a61a100d84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.049783 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b760929b-e89c-4e54-8506-e6a61a100d84" (UID: "b760929b-e89c-4e54-8506-e6a61a100d84"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.070889 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.070933 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.070942 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b760929b-e89c-4e54-8506-e6a61a100d84-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.070953 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67mct\" (UniqueName: \"kubernetes.io/projected/b760929b-e89c-4e54-8506-e6a61a100d84-kube-api-access-67mct\") on node 
\"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.070968 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b760929b-e89c-4e54-8506-e6a61a100d84-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.351582 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.480090 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdcp\" (UniqueName: \"kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp\") pod \"87f9e125-7190-4895-ac10-94ad7e66fea2\" (UID: \"87f9e125-7190-4895-ac10-94ad7e66fea2\") " Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.489203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp" (OuterVolumeSpecName: "kube-api-access-rtdcp") pod "87f9e125-7190-4895-ac10-94ad7e66fea2" (UID: "87f9e125-7190-4895-ac10-94ad7e66fea2"). InnerVolumeSpecName "kube-api-access-rtdcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.562685 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.582901 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdcp\" (UniqueName: \"kubernetes.io/projected/87f9e125-7190-4895-ac10-94ad7e66fea2-kube-api-access-rtdcp\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.684618 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5qr5\" (UniqueName: \"kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5\") pod \"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc\" (UID: \"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc\") " Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.689370 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5" (OuterVolumeSpecName: "kube-api-access-t5qr5") pod "f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" (UID: "f9a6d350-1567-4fda-bb0b-f7091ccf8bbc"). InnerVolumeSpecName "kube-api-access-t5qr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.786817 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5qr5\" (UniqueName: \"kubernetes.io/projected/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc-kube-api-access-t5qr5\") on node \"crc\" DevicePath \"\"" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.889717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerStarted","Data":"5c326ca81a0e921ac7f5c123b04646aee17f44ecf2cb35bfcb758b55458cf2de"} Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.889988 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-central-agent" containerID="cri-o://ed11c13f4112e1cd5f1416fac9822c1a0ee3d376548899053f1591095625ebae" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.890230 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.890234 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="proxy-httpd" containerID="cri-o://5c326ca81a0e921ac7f5c123b04646aee17f44ecf2cb35bfcb758b55458cf2de" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.890233 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="sg-core" containerID="cri-o://8958387c8d3bb15f22f2797b6960cf329405890bb7fffbd2b9ec459f183e082b" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.890241 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-notification-agent" containerID="cri-o://61c75477527aaa23b7cf181d2ef5c70bb32206b6b55461a1b424f70ac72de299" gracePeriod=30 Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.895523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5041-account-create-f2wjp" event={"ID":"87f9e125-7190-4895-ac10-94ad7e66fea2","Type":"ContainerDied","Data":"79388c6a908524ecf3b7ed71e6ca0294153df764f5eaa92dca353b3d908efb61"} Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.895608 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79388c6a908524ecf3b7ed71e6ca0294153df764f5eaa92dca353b3d908efb61" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.895534 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5041-account-create-f2wjp" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.897928 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.898973 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.899415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ff0e-account-create-mvz4n" event={"ID":"f9a6d350-1567-4fda-bb0b-f7091ccf8bbc","Type":"ContainerDied","Data":"2ff081b0a75131dea9c06daaa716068c40a37476cdf025d1eb5ac2160b79de6e"} Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.899518 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff081b0a75131dea9c06daaa716068c40a37476cdf025d1eb5ac2160b79de6e" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.936637 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7421323600000003 podStartE2EDuration="7.936618826s" podCreationTimestamp="2025-10-06 12:04:46 +0000 UTC" firstStartedPulling="2025-10-06 12:04:47.727158468 +0000 UTC m=+1175.139850641" lastFinishedPulling="2025-10-06 12:04:52.921644934 +0000 UTC m=+1180.334337107" observedRunningTime="2025-10-06 12:04:53.93225479 +0000 UTC m=+1181.344946963" watchObservedRunningTime="2025-10-06 12:04:53.936618826 +0000 UTC m=+1181.349310999" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.961114 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.975525 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.985293 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:53 crc kubenswrapper[4698]: E1006 12:04:53.986027 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986054 4698 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" Oct 06 12:04:53 crc kubenswrapper[4698]: E1006 12:04:53.986086 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9e125-7190-4895-ac10-94ad7e66fea2" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986097 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9e125-7190-4895-ac10-94ad7e66fea2" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: E1006 12:04:53.986114 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986123 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: E1006 12:04:53.986165 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b760929b-e89c-4e54-8506-e6a61a100d84" containerName="watcher-decision-engine" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986179 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b760929b-e89c-4e54-8506-e6a61a100d84" containerName="watcher-decision-engine" Oct 06 12:04:53 crc kubenswrapper[4698]: E1006 12:04:53.986196 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon-log" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986209 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon-log" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986492 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon-log" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986526 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="18ae0d1c-2545-4122-b2d9-3380fd017840" containerName="horizon" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986545 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b760929b-e89c-4e54-8506-e6a61a100d84" containerName="watcher-decision-engine" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986565 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f9e125-7190-4895-ac10-94ad7e66fea2" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.986596 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" containerName="mariadb-account-create" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.987659 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.989833 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Oct 06 12:04:53 crc kubenswrapper[4698]: I1006 12:04:53.994441 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.091844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.092372 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vml8d\" (UniqueName: \"kubernetes.io/projected/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-kube-api-access-vml8d\") pod 
\"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.092404 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-logs\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.092481 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.092503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.195092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.195167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-config-data\") pod \"watcher-decision-engine-0\" (UID: 
\"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.195264 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.195335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vml8d\" (UniqueName: \"kubernetes.io/projected/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-kube-api-access-vml8d\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.195372 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-logs\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.196177 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-logs\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.200615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc 
kubenswrapper[4698]: I1006 12:04:54.200623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.201800 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.212858 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vml8d\" (UniqueName: \"kubernetes.io/projected/d8e278fa-3dfa-47dd-82b0-7296cc9ef08d-kube-api-access-vml8d\") pod \"watcher-decision-engine-0\" (UID: \"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d\") " pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.416960 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.918249 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924272 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerID="5c326ca81a0e921ac7f5c123b04646aee17f44ecf2cb35bfcb758b55458cf2de" exitCode=0 Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924363 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerID="8958387c8d3bb15f22f2797b6960cf329405890bb7fffbd2b9ec459f183e082b" exitCode=2 Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924376 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerID="61c75477527aaa23b7cf181d2ef5c70bb32206b6b55461a1b424f70ac72de299" exitCode=0 Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerDied","Data":"5c326ca81a0e921ac7f5c123b04646aee17f44ecf2cb35bfcb758b55458cf2de"} Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerDied","Data":"8958387c8d3bb15f22f2797b6960cf329405890bb7fffbd2b9ec459f183e082b"} Oct 06 12:04:54 crc kubenswrapper[4698]: I1006 12:04:54.924531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerDied","Data":"61c75477527aaa23b7cf181d2ef5c70bb32206b6b55461a1b424f70ac72de299"} Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.111084 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-w87sz"] Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.112907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.118124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.121589 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.121716 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7qjx4" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.132404 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w87sz"] Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.222859 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.223049 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.223109 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24rx\" (UniqueName: 
\"kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.223144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.324984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.325064 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24rx\" (UniqueName: \"kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.325112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.325181 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.333486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.335510 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.335779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.345496 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24rx\" (UniqueName: \"kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx\") pod \"nova-cell0-conductor-db-sync-w87sz\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.348141 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b760929b-e89c-4e54-8506-e6a61a100d84" 
path="/var/lib/kubelet/pods/b760929b-e89c-4e54-8506-e6a61a100d84/volumes" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.432126 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.914825 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w87sz"] Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.946527 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.964397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d","Type":"ContainerStarted","Data":"af21a2c420c46162fb8c42fab2bb0a471e1d4d6a9883b016de8e1d0f1cb802cb"} Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.964464 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"d8e278fa-3dfa-47dd-82b0-7296cc9ef08d","Type":"ContainerStarted","Data":"5b8223564989cb33c5b9f87be860cb76dc1a31e88f3e6a4bb4846736b489cb2d"} Oct 06 12:04:55 crc kubenswrapper[4698]: I1006 12:04:55.990396 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.990367173 podStartE2EDuration="2.990367173s" podCreationTimestamp="2025-10-06 12:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:04:55.987459382 +0000 UTC m=+1183.400151545" watchObservedRunningTime="2025-10-06 12:04:55.990367173 +0000 UTC m=+1183.403059346" Oct 06 12:04:56 crc kubenswrapper[4698]: I1006 12:04:56.978081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w87sz" 
event={"ID":"8e437d39-eb38-4140-ad48-50740fb31ee4","Type":"ContainerStarted","Data":"13fd2f3dd74f1aaffc5baa04cff54ac6816c48c0a2a66e1aed1235c8fd85fb4c"} Oct 06 12:05:00 crc kubenswrapper[4698]: I1006 12:05:00.026158 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerID="ed11c13f4112e1cd5f1416fac9822c1a0ee3d376548899053f1591095625ebae" exitCode=0 Oct 06 12:05:00 crc kubenswrapper[4698]: I1006 12:05:00.026286 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerDied","Data":"ed11c13f4112e1cd5f1416fac9822c1a0ee3d376548899053f1591095625ebae"} Oct 06 12:05:04 crc kubenswrapper[4698]: I1006 12:05:04.418584 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Oct 06 12:05:04 crc kubenswrapper[4698]: I1006 12:05:04.475513 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Oct 06 12:05:04 crc kubenswrapper[4698]: I1006 12:05:04.959807 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.088852 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.089848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090046 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090199 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090355 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090526 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlb79\" (UniqueName: \"kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79\") pod \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\" (UID: \"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f\") " Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.090527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.095728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts" (OuterVolumeSpecName: "scripts") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.097678 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79" (OuterVolumeSpecName: "kube-api-access-dlb79") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "kube-api-access-dlb79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.115465 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.115473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f","Type":"ContainerDied","Data":"3247e657a6adf20819464d132ae94393965b75dd3338b47e4ff5c3231fb07fbb"} Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.115614 4698 scope.go:117] "RemoveContainer" containerID="5c326ca81a0e921ac7f5c123b04646aee17f44ecf2cb35bfcb758b55458cf2de" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.115788 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.125825 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.163289 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.164081 4698 scope.go:117] "RemoveContainer" containerID="8958387c8d3bb15f22f2797b6960cf329405890bb7fffbd2b9ec459f183e082b" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.193348 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.193387 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.193398 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.193407 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlb79\" (UniqueName: \"kubernetes.io/projected/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-kube-api-access-dlb79\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.193418 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.194537 4698 scope.go:117] "RemoveContainer" containerID="61c75477527aaa23b7cf181d2ef5c70bb32206b6b55461a1b424f70ac72de299" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.217660 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.221655 4698 scope.go:117] "RemoveContainer" containerID="ed11c13f4112e1cd5f1416fac9822c1a0ee3d376548899053f1591095625ebae" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.295155 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data" (OuterVolumeSpecName: "config-data") pod "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" (UID: "9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.295268 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.397507 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.441156 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.449083 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.475108 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:05 crc kubenswrapper[4698]: 
E1006 12:05:05.475573 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-central-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.475597 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-central-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: E1006 12:05:05.475625 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-notification-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.475635 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-notification-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: E1006 12:05:05.475656 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="sg-core" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.475665 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="sg-core" Oct 06 12:05:05 crc kubenswrapper[4698]: E1006 12:05:05.475696 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="proxy-httpd" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.475704 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="proxy-httpd" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.476519 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="proxy-httpd" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.476561 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="sg-core" Oct 06 12:05:05 crc 
kubenswrapper[4698]: I1006 12:05:05.476584 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-notification-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.476601 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" containerName="ceilometer-central-agent" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.478891 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.481351 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.488416 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.506221 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q5sf\" (UniqueName: \"kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601649 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601676 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.601709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.703872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " 
pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.703936 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.703983 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704703 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.704802 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q5sf\" (UniqueName: \"kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.710936 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.711657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.712446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.715564 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.744769 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q5sf\" (UniqueName: \"kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf\") pod \"ceilometer-0\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " pod="openstack/ceilometer-0" Oct 06 12:05:05 crc kubenswrapper[4698]: I1006 12:05:05.807488 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:06 crc kubenswrapper[4698]: I1006 12:05:06.139632 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w87sz" event={"ID":"8e437d39-eb38-4140-ad48-50740fb31ee4","Type":"ContainerStarted","Data":"55f917659c6ec1d986c64bb9973cb8c27b97a1fd240079253fb1050b1d4f03e9"} Oct 06 12:05:06 crc kubenswrapper[4698]: I1006 12:05:06.205938 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w87sz" podStartSLOduration=2.133379573 podStartE2EDuration="11.205913495s" podCreationTimestamp="2025-10-06 12:04:55 +0000 UTC" firstStartedPulling="2025-10-06 12:04:55.946298165 +0000 UTC m=+1183.358990338" lastFinishedPulling="2025-10-06 12:05:05.018832077 +0000 UTC m=+1192.431524260" observedRunningTime="2025-10-06 12:05:06.165126389 +0000 UTC m=+1193.577818562" watchObservedRunningTime="2025-10-06 12:05:06.205913495 +0000 UTC m=+1193.618605668" Oct 06 12:05:06 crc kubenswrapper[4698]: I1006 12:05:06.388357 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:06 crc kubenswrapper[4698]: W1006 12:05:06.389698 4698 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9c6b36c_e81f_40b5_afaa_7bd272193593.slice/crio-00e66cf88393538525d7e3a7c53a64c38a465f7a4ece2dce3e9d02257e7ca6b6 WatchSource:0}: Error finding container 00e66cf88393538525d7e3a7c53a64c38a465f7a4ece2dce3e9d02257e7ca6b6: Status 404 returned error can't find the container with id 00e66cf88393538525d7e3a7c53a64c38a465f7a4ece2dce3e9d02257e7ca6b6 Oct 06 12:05:07 crc kubenswrapper[4698]: I1006 12:05:07.148772 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerStarted","Data":"00e66cf88393538525d7e3a7c53a64c38a465f7a4ece2dce3e9d02257e7ca6b6"} Oct 06 12:05:07 crc kubenswrapper[4698]: I1006 12:05:07.362506 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f" path="/var/lib/kubelet/pods/9f25bd9c-eeac-434f-b9fe-3ee6a0db8d4f/volumes" Oct 06 12:05:08 crc kubenswrapper[4698]: I1006 12:05:08.166169 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerStarted","Data":"6448491d95c9bb63317ba800fe6a4440ae86dd223499ca790b45e057358373e1"} Oct 06 12:05:09 crc kubenswrapper[4698]: I1006 12:05:09.180298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerStarted","Data":"035c22f3505bf614867b7e7ae51e472ee714d89cc992cf0a0f18d54d011e7911"} Oct 06 12:05:09 crc kubenswrapper[4698]: I1006 12:05:09.181187 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerStarted","Data":"b04a7b469a51f3599570cd392e9ca375e79a9fefa48f98747ca76f42d4a369e5"} Oct 06 12:05:11 crc kubenswrapper[4698]: I1006 12:05:11.233692 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerStarted","Data":"667b0a8d129511b4b23097be40355f7cfe184feafba3f6d62f4c742f075508d9"} Oct 06 12:05:11 crc kubenswrapper[4698]: I1006 12:05:11.234493 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:05:11 crc kubenswrapper[4698]: I1006 12:05:11.282587 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9812293410000001 podStartE2EDuration="6.282555021s" podCreationTimestamp="2025-10-06 12:05:05 +0000 UTC" firstStartedPulling="2025-10-06 12:05:06.3918961 +0000 UTC m=+1193.804588283" lastFinishedPulling="2025-10-06 12:05:10.69322175 +0000 UTC m=+1198.105913963" observedRunningTime="2025-10-06 12:05:11.276422591 +0000 UTC m=+1198.689114774" watchObservedRunningTime="2025-10-06 12:05:11.282555021 +0000 UTC m=+1198.695247204" Oct 06 12:05:15 crc kubenswrapper[4698]: I1006 12:05:15.524495 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:15 crc kubenswrapper[4698]: I1006 12:05:15.525586 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-central-agent" containerID="cri-o://6448491d95c9bb63317ba800fe6a4440ae86dd223499ca790b45e057358373e1" gracePeriod=30 Oct 06 12:05:15 crc kubenswrapper[4698]: I1006 12:05:15.526117 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="proxy-httpd" containerID="cri-o://667b0a8d129511b4b23097be40355f7cfe184feafba3f6d62f4c742f075508d9" gracePeriod=30 Oct 06 12:05:15 crc kubenswrapper[4698]: I1006 12:05:15.526170 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="sg-core" containerID="cri-o://035c22f3505bf614867b7e7ae51e472ee714d89cc992cf0a0f18d54d011e7911" gracePeriod=30 Oct 06 12:05:15 crc kubenswrapper[4698]: I1006 12:05:15.526430 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-notification-agent" containerID="cri-o://b04a7b469a51f3599570cd392e9ca375e79a9fefa48f98747ca76f42d4a369e5" gracePeriod=30 Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.301895 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerID="667b0a8d129511b4b23097be40355f7cfe184feafba3f6d62f4c742f075508d9" exitCode=0 Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302285 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerID="035c22f3505bf614867b7e7ae51e472ee714d89cc992cf0a0f18d54d011e7911" exitCode=2 Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302293 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerID="b04a7b469a51f3599570cd392e9ca375e79a9fefa48f98747ca76f42d4a369e5" exitCode=0 Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302303 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerID="6448491d95c9bb63317ba800fe6a4440ae86dd223499ca790b45e057358373e1" exitCode=0 Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302325 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerDied","Data":"667b0a8d129511b4b23097be40355f7cfe184feafba3f6d62f4c742f075508d9"} Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerDied","Data":"035c22f3505bf614867b7e7ae51e472ee714d89cc992cf0a0f18d54d011e7911"} Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerDied","Data":"b04a7b469a51f3599570cd392e9ca375e79a9fefa48f98747ca76f42d4a369e5"} Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.302375 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerDied","Data":"6448491d95c9bb63317ba800fe6a4440ae86dd223499ca790b45e057358373e1"} Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.469919 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.571287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.571797 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.571955 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q5sf\" (UniqueName: \"kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") 
" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572075 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572111 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle\") pod \"f9c6b36c-e81f-40b5-afaa-7bd272193593\" (UID: \"f9c6b36c-e81f-40b5-afaa-7bd272193593\") " Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572542 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.572813 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.574249 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.581284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf" (OuterVolumeSpecName: "kube-api-access-4q5sf") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "kube-api-access-4q5sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.584237 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts" (OuterVolumeSpecName: "scripts") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.610042 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.673582 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.675070 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q5sf\" (UniqueName: \"kubernetes.io/projected/f9c6b36c-e81f-40b5-afaa-7bd272193593-kube-api-access-4q5sf\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.675117 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9c6b36c-e81f-40b5-afaa-7bd272193593-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.675139 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.675153 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.675166 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.719251 4698 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data" (OuterVolumeSpecName: "config-data") pod "f9c6b36c-e81f-40b5-afaa-7bd272193593" (UID: "f9c6b36c-e81f-40b5-afaa-7bd272193593"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:16 crc kubenswrapper[4698]: I1006 12:05:16.777905 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c6b36c-e81f-40b5-afaa-7bd272193593-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.321051 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9c6b36c-e81f-40b5-afaa-7bd272193593","Type":"ContainerDied","Data":"00e66cf88393538525d7e3a7c53a64c38a465f7a4ece2dce3e9d02257e7ca6b6"} Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.321165 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.321544 4698 scope.go:117] "RemoveContainer" containerID="667b0a8d129511b4b23097be40355f7cfe184feafba3f6d62f4c742f075508d9" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.356040 4698 scope.go:117] "RemoveContainer" containerID="035c22f3505bf614867b7e7ae51e472ee714d89cc992cf0a0f18d54d011e7911" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.398724 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.407659 4698 scope.go:117] "RemoveContainer" containerID="b04a7b469a51f3599570cd392e9ca375e79a9fefa48f98747ca76f42d4a369e5" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.421596 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.431572 4698 scope.go:117] "RemoveContainer" 
containerID="6448491d95c9bb63317ba800fe6a4440ae86dd223499ca790b45e057358373e1" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.438857 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:17 crc kubenswrapper[4698]: E1006 12:05:17.439382 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-notification-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439405 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-notification-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: E1006 12:05:17.439435 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-central-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439443 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-central-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: E1006 12:05:17.439455 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="proxy-httpd" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439461 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="proxy-httpd" Oct 06 12:05:17 crc kubenswrapper[4698]: E1006 12:05:17.439486 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="sg-core" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439492 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="sg-core" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439713 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="sg-core" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439728 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="proxy-httpd" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439738 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-central-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.439752 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" containerName="ceilometer-notification-agent" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.442640 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.445547 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.447180 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.462812 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.597271 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.598801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.598968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.599205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.599341 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.599480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqhxf\" (UniqueName: \"kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.599667 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " 
pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701746 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqhxf\" (UniqueName: \"kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701879 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701909 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.701965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.705611 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.705834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.709336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.710258 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.714806 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.717947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.722324 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqhxf\" (UniqueName: \"kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf\") pod \"ceilometer-0\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " pod="openstack/ceilometer-0" Oct 06 12:05:17 crc kubenswrapper[4698]: I1006 12:05:17.774202 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:18 crc kubenswrapper[4698]: I1006 12:05:18.280686 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:18 crc kubenswrapper[4698]: I1006 12:05:18.344877 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerStarted","Data":"a67c99f76355770bca5a157399eae56063ce9f7aed3ea8afba81e1c4348dc336"} Oct 06 12:05:19 crc kubenswrapper[4698]: I1006 12:05:19.365368 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c6b36c-e81f-40b5-afaa-7bd272193593" path="/var/lib/kubelet/pods/f9c6b36c-e81f-40b5-afaa-7bd272193593/volumes" Oct 06 12:05:19 crc kubenswrapper[4698]: I1006 12:05:19.368217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerStarted","Data":"d6e02dbbe1634ea88f3b936f2bab56044ca0c10f1ba2667bb9e228fefcb78f17"} Oct 06 12:05:19 crc kubenswrapper[4698]: I1006 12:05:19.373758 4698 generic.go:334] "Generic (PLEG): container finished" podID="8e437d39-eb38-4140-ad48-50740fb31ee4" containerID="55f917659c6ec1d986c64bb9973cb8c27b97a1fd240079253fb1050b1d4f03e9" exitCode=0 Oct 06 12:05:19 crc kubenswrapper[4698]: I1006 12:05:19.373812 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w87sz" event={"ID":"8e437d39-eb38-4140-ad48-50740fb31ee4","Type":"ContainerDied","Data":"55f917659c6ec1d986c64bb9973cb8c27b97a1fd240079253fb1050b1d4f03e9"} Oct 06 12:05:20 crc kubenswrapper[4698]: I1006 12:05:20.405906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerStarted","Data":"5ead9debc282b2705dfe0d498dd2ab98beca622c7be67fa68001ddcf3dfd7e74"} Oct 06 12:05:20 crc kubenswrapper[4698]: I1006 12:05:20.990099 4698 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.103414 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle\") pod \"8e437d39-eb38-4140-ad48-50740fb31ee4\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.103571 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m24rx\" (UniqueName: \"kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx\") pod \"8e437d39-eb38-4140-ad48-50740fb31ee4\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.103644 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data\") pod \"8e437d39-eb38-4140-ad48-50740fb31ee4\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.103928 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts\") pod \"8e437d39-eb38-4140-ad48-50740fb31ee4\" (UID: \"8e437d39-eb38-4140-ad48-50740fb31ee4\") " Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.110329 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts" (OuterVolumeSpecName: "scripts") pod "8e437d39-eb38-4140-ad48-50740fb31ee4" (UID: "8e437d39-eb38-4140-ad48-50740fb31ee4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.111331 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx" (OuterVolumeSpecName: "kube-api-access-m24rx") pod "8e437d39-eb38-4140-ad48-50740fb31ee4" (UID: "8e437d39-eb38-4140-ad48-50740fb31ee4"). InnerVolumeSpecName "kube-api-access-m24rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.146364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e437d39-eb38-4140-ad48-50740fb31ee4" (UID: "8e437d39-eb38-4140-ad48-50740fb31ee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.166885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data" (OuterVolumeSpecName: "config-data") pod "8e437d39-eb38-4140-ad48-50740fb31ee4" (UID: "8e437d39-eb38-4140-ad48-50740fb31ee4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.208385 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.208446 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.208472 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m24rx\" (UniqueName: \"kubernetes.io/projected/8e437d39-eb38-4140-ad48-50740fb31ee4-kube-api-access-m24rx\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.208492 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e437d39-eb38-4140-ad48-50740fb31ee4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.420189 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w87sz" event={"ID":"8e437d39-eb38-4140-ad48-50740fb31ee4","Type":"ContainerDied","Data":"13fd2f3dd74f1aaffc5baa04cff54ac6816c48c0a2a66e1aed1235c8fd85fb4c"} Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.420257 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13fd2f3dd74f1aaffc5baa04cff54ac6816c48c0a2a66e1aed1235c8fd85fb4c" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.420213 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w87sz" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.422988 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerStarted","Data":"da65acd0505197b0dfa12126fc5e30e36497f49d32ffa59fd1394adfef7f17ea"} Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.519330 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:21 crc kubenswrapper[4698]: E1006 12:05:21.519929 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e437d39-eb38-4140-ad48-50740fb31ee4" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.519955 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e437d39-eb38-4140-ad48-50740fb31ee4" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.520316 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e437d39-eb38-4140-ad48-50740fb31ee4" containerName="nova-cell0-conductor-db-sync" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.522564 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.525467 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.537171 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.540515 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7qjx4" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.627506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.627746 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skldp\" (UniqueName: \"kubernetes.io/projected/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-kube-api-access-skldp\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.628689 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.732182 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.732283 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skldp\" (UniqueName: \"kubernetes.io/projected/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-kube-api-access-skldp\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.732491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.747978 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.751487 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skldp\" (UniqueName: \"kubernetes.io/projected/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-kube-api-access-skldp\") pod \"nova-cell0-conductor-0\" (UID: \"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.761612 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300ccf8e-2aa0-41c6-be99-b55c56ac8c73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"300ccf8e-2aa0-41c6-be99-b55c56ac8c73\") " pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:21 crc kubenswrapper[4698]: I1006 12:05:21.862715 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:22 crc kubenswrapper[4698]: I1006 12:05:22.416577 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 12:05:22 crc kubenswrapper[4698]: W1006 12:05:22.428177 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod300ccf8e_2aa0_41c6_be99_b55c56ac8c73.slice/crio-be1d31151849df533f06fde308aa67e38aed44ee689f25d4cfcffddbe9e8aafc WatchSource:0}: Error finding container be1d31151849df533f06fde308aa67e38aed44ee689f25d4cfcffddbe9e8aafc: Status 404 returned error can't find the container with id be1d31151849df533f06fde308aa67e38aed44ee689f25d4cfcffddbe9e8aafc Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.450102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerStarted","Data":"43e0f98635075d66a6441c1e9d0a92d29066ec02df18a1f0e6a36893286ade2f"} Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.450584 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.452071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"300ccf8e-2aa0-41c6-be99-b55c56ac8c73","Type":"ContainerStarted","Data":"2540158d94f996f9f38a1f51da652f8ce0d3fea083bd48941343f4ce1dec46e3"} Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.452133 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"300ccf8e-2aa0-41c6-be99-b55c56ac8c73","Type":"ContainerStarted","Data":"be1d31151849df533f06fde308aa67e38aed44ee689f25d4cfcffddbe9e8aafc"} Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.452198 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.481507 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.368254987 podStartE2EDuration="6.48147669s" podCreationTimestamp="2025-10-06 12:05:17 +0000 UTC" firstStartedPulling="2025-10-06 12:05:18.291724931 +0000 UTC m=+1205.704417124" lastFinishedPulling="2025-10-06 12:05:22.404946644 +0000 UTC m=+1209.817638827" observedRunningTime="2025-10-06 12:05:23.47531114 +0000 UTC m=+1210.888003313" watchObservedRunningTime="2025-10-06 12:05:23.48147669 +0000 UTC m=+1210.894168863" Oct 06 12:05:23 crc kubenswrapper[4698]: I1006 12:05:23.497259 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.497233396 podStartE2EDuration="2.497233396s" podCreationTimestamp="2025-10-06 12:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:23.494704204 +0000 UTC m=+1210.907396387" watchObservedRunningTime="2025-10-06 12:05:23.497233396 +0000 UTC m=+1210.909925589" Oct 06 12:05:31 crc kubenswrapper[4698]: I1006 12:05:31.908173 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.495135 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b9hwq"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.496646 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.506218 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.506569 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.506887 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9hwq"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.569697 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lg57\" (UniqueName: \"kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.569972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.570124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.570197 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.677648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lg57\" (UniqueName: \"kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.678248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.678300 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.678334 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.685744 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.687630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.691132 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.693352 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.704901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.724426 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.741526 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.757005 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lg57\" (UniqueName: \"kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57\") pod \"nova-cell0-cell-mapping-b9hwq\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 
crc kubenswrapper[4698]: I1006 12:05:32.840265 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.884884 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.884931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.884963 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.885068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmd2c\" (UniqueName: \"kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.898203 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.907283 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.919655 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.968107 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.969793 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.988429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.988484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.988513 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc kubenswrapper[4698]: I1006 12:05:32.988568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmd2c\" (UniqueName: \"kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:32 crc 
kubenswrapper[4698]: I1006 12:05:32.988611 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:32.998293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.023371 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.031968 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.032066 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.048092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmd2c\" (UniqueName: \"kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c\") pod \"nova-api-0\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " pod="openstack/nova-api-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090258 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090361 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkdvw\" (UniqueName: \"kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnz4\" (UniqueName: \"kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " 
pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.090457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.113116 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.170447 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.179844 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.183284 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.186582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.192753 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkdvw\" (UniqueName: \"kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wpnz4\" (UniqueName: \"kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.193622 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.195268 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.198672 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.199618 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.202898 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.208853 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.214571 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.216545 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.233403 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkdvw\" (UniqueName: \"kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw\") pod \"nova-cell1-novncproxy-0\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.235582 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.250743 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpnz4\" (UniqueName: \"kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4\") pod \"nova-metadata-0\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324197 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324413 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324587 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swm4c\" (UniqueName: \"kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324784 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324848 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.324875 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpfd\" (UniqueName: \"kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.349501 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.404085 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.426989 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427739 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb\") pod 
\"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427824 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swm4c\" (UniqueName: \"kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.427974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.428020 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 
06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.428053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpfd\" (UniqueName: \"kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.431407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.432123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.432646 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.433370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.433512 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.441763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.442309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.450217 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpfd\" (UniqueName: \"kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd\") pod \"dnsmasq-dns-845d6d6f59-ssjhh\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.480849 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swm4c\" (UniqueName: \"kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c\") pod \"nova-scheduler-0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.505127 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.618751 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:33 crc kubenswrapper[4698]: I1006 12:05:33.896401 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9hwq"] Oct 06 12:05:34 crc kubenswrapper[4698]: W1006 12:05:34.151957 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af883c8_8d15_48a6_8aab_3f648a484b79.slice/crio-2aefccb2e6bbbd74fb61be2990a80491b0383d7f3f13edb8b09394b6d3a77ad9 WatchSource:0}: Error finding container 2aefccb2e6bbbd74fb61be2990a80491b0383d7f3f13edb8b09394b6d3a77ad9: Status 404 returned error can't find the container with id 2aefccb2e6bbbd74fb61be2990a80491b0383d7f3f13edb8b09394b6d3a77ad9 Oct 06 12:05:34 crc kubenswrapper[4698]: W1006 12:05:34.183115 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd2a6a4_b040_4cc1_bc76_00ae3c0a1d74.slice/crio-e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb WatchSource:0}: Error finding container e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb: Status 404 returned error can't find the container with id e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.183234 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.228108 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.238160 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 
12:05:34.254921 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.269981 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zcd97"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.271918 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.276389 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.276512 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.283026 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zcd97"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.365549 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.365681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.365836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n76c\" (UniqueName: 
\"kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.365898 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.476743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.477134 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n76c\" (UniqueName: \"kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.477244 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.477543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.486779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.489073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.496966 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.498218 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.500315 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n76c\" (UniqueName: \"kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c\") pod \"nova-cell1-conductor-db-sync-zcd97\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 
crc kubenswrapper[4698]: W1006 12:05:34.506857 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df91b94_ef4b_4b23_9401_159a50392bb8.slice/crio-4bbdd591dcacb8f0bc6102c7887b04678755d5acf452ef26e8c140f39ce2adb1 WatchSource:0}: Error finding container 4bbdd591dcacb8f0bc6102c7887b04678755d5acf452ef26e8c140f39ce2adb1: Status 404 returned error can't find the container with id 4bbdd591dcacb8f0bc6102c7887b04678755d5acf452ef26e8c140f39ce2adb1 Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.620679 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.715264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerStarted","Data":"e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.718419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46e5053d-2e18-4ba7-ae0b-b426e0127ad0","Type":"ContainerStarted","Data":"ad4b81235f1d504366bea1c4b4a0cca43942b25b551eddf04f0bccb03c9533bb"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.723572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9hwq" event={"ID":"582e7285-37d7-483e-8196-6fbcfe1cc9ec","Type":"ContainerStarted","Data":"a74e8cfe356a34926ecc11c993e83499c1c1d8a53f409820a7d75e011ea33b41"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.723628 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9hwq" event={"ID":"582e7285-37d7-483e-8196-6fbcfe1cc9ec","Type":"ContainerStarted","Data":"a5d0624874d9921f2e2de6a7ee0405ded6ed2d092865c83513bd5929a196781e"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 
12:05:34.726571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerStarted","Data":"99629b2d9cc92b885ff9a1b88da708b140c43272ffdff1429d5638b5aee85e18"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.729485 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" event={"ID":"3df91b94-ef4b-4b23-9401-159a50392bb8","Type":"ContainerStarted","Data":"4bbdd591dcacb8f0bc6102c7887b04678755d5acf452ef26e8c140f39ce2adb1"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.732541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5af883c8-8d15-48a6-8aab-3f648a484b79","Type":"ContainerStarted","Data":"2aefccb2e6bbbd74fb61be2990a80491b0383d7f3f13edb8b09394b6d3a77ad9"} Oct 06 12:05:34 crc kubenswrapper[4698]: I1006 12:05:34.757207 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b9hwq" podStartSLOduration=2.757185941 podStartE2EDuration="2.757185941s" podCreationTimestamp="2025-10-06 12:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:34.750098068 +0000 UTC m=+1222.162790251" watchObservedRunningTime="2025-10-06 12:05:34.757185941 +0000 UTC m=+1222.169878124" Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.055584 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zcd97"] Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.759394 4698 generic.go:334] "Generic (PLEG): container finished" podID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerID="88d72f5634ea9e0f80dc6be79c827cf404a823d63f204e0fc84324b6b38ad9f0" exitCode=0 Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.759981 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" event={"ID":"3df91b94-ef4b-4b23-9401-159a50392bb8","Type":"ContainerDied","Data":"88d72f5634ea9e0f80dc6be79c827cf404a823d63f204e0fc84324b6b38ad9f0"} Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.777305 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zcd97" event={"ID":"941bdc31-a448-4fce-910b-54eed75a1974","Type":"ContainerStarted","Data":"41fe2948ac012ee93159502e051c4f083b86682be3350fab5a2a399b5353d058"} Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.777388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zcd97" event={"ID":"941bdc31-a448-4fce-910b-54eed75a1974","Type":"ContainerStarted","Data":"9fdb275f91662ccc62531740a8df0714ed617a1a58c18f4064e6fe7b8ea00e3a"} Oct 06 12:05:35 crc kubenswrapper[4698]: I1006 12:05:35.835185 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zcd97" podStartSLOduration=1.835163103 podStartE2EDuration="1.835163103s" podCreationTimestamp="2025-10-06 12:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:35.834868685 +0000 UTC m=+1223.247560858" watchObservedRunningTime="2025-10-06 12:05:35.835163103 +0000 UTC m=+1223.247855266" Oct 06 12:05:36 crc kubenswrapper[4698]: I1006 12:05:36.790961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" event={"ID":"3df91b94-ef4b-4b23-9401-159a50392bb8","Type":"ContainerStarted","Data":"c59ec42cc78888aabc7cb973c252b84ff9936d5ff1c5bce055239ca54295c417"} Oct 06 12:05:36 crc kubenswrapper[4698]: I1006 12:05:36.791543 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:36 crc kubenswrapper[4698]: I1006 12:05:36.840623 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" podStartSLOduration=3.8405875419999997 podStartE2EDuration="3.840587542s" podCreationTimestamp="2025-10-06 12:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:36.823909814 +0000 UTC m=+1224.236601987" watchObservedRunningTime="2025-10-06 12:05:36.840587542 +0000 UTC m=+1224.253279715" Oct 06 12:05:37 crc kubenswrapper[4698]: I1006 12:05:37.559060 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:05:37 crc kubenswrapper[4698]: I1006 12:05:37.571101 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.839047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46e5053d-2e18-4ba7-ae0b-b426e0127ad0","Type":"ContainerStarted","Data":"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.843544 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerStarted","Data":"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.843605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerStarted","Data":"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.843624 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-log" 
containerID="cri-o://2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" gracePeriod=30 Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.843688 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-metadata" containerID="cri-o://395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" gracePeriod=30 Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.847379 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5af883c8-8d15-48a6-8aab-3f648a484b79","Type":"ContainerStarted","Data":"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.847409 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5af883c8-8d15-48a6-8aab-3f648a484b79" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287" gracePeriod=30 Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.855982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerStarted","Data":"c28c4c51da742f5427d5f2ff1a4aac7320a12f96d6774923c5fbbbcab51acc90"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.856257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerStarted","Data":"dc017e1b923a798a9570b294f0e60590f5e853e8c6c601c23cb81124e5b861e3"} Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.882872 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.336984669 podStartE2EDuration="7.882841114s" 
podCreationTimestamp="2025-10-06 12:05:32 +0000 UTC" firstStartedPulling="2025-10-06 12:05:34.248003868 +0000 UTC m=+1221.660696041" lastFinishedPulling="2025-10-06 12:05:38.793860313 +0000 UTC m=+1226.206552486" observedRunningTime="2025-10-06 12:05:39.861074652 +0000 UTC m=+1227.273766825" watchObservedRunningTime="2025-10-06 12:05:39.882841114 +0000 UTC m=+1227.295533277" Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.886594 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.282652281 podStartE2EDuration="7.886579755s" podCreationTimestamp="2025-10-06 12:05:32 +0000 UTC" firstStartedPulling="2025-10-06 12:05:34.188967815 +0000 UTC m=+1221.601659998" lastFinishedPulling="2025-10-06 12:05:38.792895289 +0000 UTC m=+1226.205587472" observedRunningTime="2025-10-06 12:05:39.882186068 +0000 UTC m=+1227.294878241" watchObservedRunningTime="2025-10-06 12:05:39.886579755 +0000 UTC m=+1227.299271918" Oct 06 12:05:39 crc kubenswrapper[4698]: I1006 12:05:39.923158 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.347716211 podStartE2EDuration="7.923125628s" podCreationTimestamp="2025-10-06 12:05:32 +0000 UTC" firstStartedPulling="2025-10-06 12:05:34.217572194 +0000 UTC m=+1221.630264367" lastFinishedPulling="2025-10-06 12:05:38.792981611 +0000 UTC m=+1226.205673784" observedRunningTime="2025-10-06 12:05:39.908409859 +0000 UTC m=+1227.321102032" watchObservedRunningTime="2025-10-06 12:05:39.923125628 +0000 UTC m=+1227.335817801" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.506144 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.540879 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.924304621 podStartE2EDuration="8.540856823s" podCreationTimestamp="2025-10-06 12:05:32 +0000 UTC" firstStartedPulling="2025-10-06 12:05:34.183211485 +0000 UTC m=+1221.595903658" lastFinishedPulling="2025-10-06 12:05:38.799763687 +0000 UTC m=+1226.212455860" observedRunningTime="2025-10-06 12:05:39.932385794 +0000 UTC m=+1227.345077967" watchObservedRunningTime="2025-10-06 12:05:40.540856823 +0000 UTC m=+1227.953549006" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.563343 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs\") pod \"96a1df19-c67d-478f-94ef-73c304baf68e\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.563463 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle\") pod \"96a1df19-c67d-478f-94ef-73c304baf68e\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.563528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnz4\" (UniqueName: \"kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4\") pod \"96a1df19-c67d-478f-94ef-73c304baf68e\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.563696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data\") pod 
\"96a1df19-c67d-478f-94ef-73c304baf68e\" (UID: \"96a1df19-c67d-478f-94ef-73c304baf68e\") " Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.564956 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs" (OuterVolumeSpecName: "logs") pod "96a1df19-c67d-478f-94ef-73c304baf68e" (UID: "96a1df19-c67d-478f-94ef-73c304baf68e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.570969 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4" (OuterVolumeSpecName: "kube-api-access-wpnz4") pod "96a1df19-c67d-478f-94ef-73c304baf68e" (UID: "96a1df19-c67d-478f-94ef-73c304baf68e"). InnerVolumeSpecName "kube-api-access-wpnz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.599806 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96a1df19-c67d-478f-94ef-73c304baf68e" (UID: "96a1df19-c67d-478f-94ef-73c304baf68e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.600446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data" (OuterVolumeSpecName: "config-data") pod "96a1df19-c67d-478f-94ef-73c304baf68e" (UID: "96a1df19-c67d-478f-94ef-73c304baf68e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.666329 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.666366 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpnz4\" (UniqueName: \"kubernetes.io/projected/96a1df19-c67d-478f-94ef-73c304baf68e-kube-api-access-wpnz4\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.666379 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a1df19-c67d-478f-94ef-73c304baf68e-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.666389 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96a1df19-c67d-478f-94ef-73c304baf68e-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.871549 4698 generic.go:334] "Generic (PLEG): container finished" podID="96a1df19-c67d-478f-94ef-73c304baf68e" containerID="395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" exitCode=0 Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.872001 4698 generic.go:334] "Generic (PLEG): container finished" podID="96a1df19-c67d-478f-94ef-73c304baf68e" containerID="2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" exitCode=143 Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.871611 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.871608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerDied","Data":"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891"} Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.872096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerDied","Data":"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99"} Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.872120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96a1df19-c67d-478f-94ef-73c304baf68e","Type":"ContainerDied","Data":"99629b2d9cc92b885ff9a1b88da708b140c43272ffdff1429d5638b5aee85e18"} Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.872138 4698 scope.go:117] "RemoveContainer" containerID="395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.924811 4698 scope.go:117] "RemoveContainer" containerID="2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.993641 4698 scope.go:117] "RemoveContainer" containerID="395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" Oct 06 12:05:40 crc kubenswrapper[4698]: E1006 12:05:40.994616 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891\": container with ID starting with 395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891 not found: ID does not exist" containerID="395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" Oct 06 12:05:40 crc kubenswrapper[4698]: 
I1006 12:05:40.994729 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891"} err="failed to get container status \"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891\": rpc error: code = NotFound desc = could not find container \"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891\": container with ID starting with 395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891 not found: ID does not exist" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.994798 4698 scope.go:117] "RemoveContainer" containerID="2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" Oct 06 12:05:40 crc kubenswrapper[4698]: E1006 12:05:40.997221 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99\": container with ID starting with 2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99 not found: ID does not exist" containerID="2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.997273 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99"} err="failed to get container status \"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99\": rpc error: code = NotFound desc = could not find container \"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99\": container with ID starting with 2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99 not found: ID does not exist" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.997302 4698 scope.go:117] "RemoveContainer" containerID="395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891" Oct 06 12:05:40 crc 
kubenswrapper[4698]: I1006 12:05:40.997658 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891"} err="failed to get container status \"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891\": rpc error: code = NotFound desc = could not find container \"395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891\": container with ID starting with 395205e51385ca9d00095d81ce28e292f4df7e36042ba491c3630622eb12b891 not found: ID does not exist" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.997681 4698 scope.go:117] "RemoveContainer" containerID="2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99" Oct 06 12:05:40 crc kubenswrapper[4698]: I1006 12:05:40.997931 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99"} err="failed to get container status \"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99\": rpc error: code = NotFound desc = could not find container \"2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99\": container with ID starting with 2c5b771c3b4ef819f52e4dee51efb808f6a8744afc4e21df177857694950ba99 not found: ID does not exist" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.003330 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.012834 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.025095 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:41 crc kubenswrapper[4698]: E1006 12:05:41.025885 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" 
containerName="nova-metadata-log" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.025915 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-log" Oct 06 12:05:41 crc kubenswrapper[4698]: E1006 12:05:41.025947 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-metadata" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.025958 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-metadata" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.026293 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-log" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.026346 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" containerName="nova-metadata-metadata" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.028371 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.031562 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.033003 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.040472 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.079289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.079703 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.079897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76454\" (UniqueName: \"kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.079965 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data\") pod \"nova-metadata-0\" 
(UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.080289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.183357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.183457 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76454\" (UniqueName: \"kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.183496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.183579 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.183704 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.187110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.190373 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.191047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.201531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data\") pod \"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.209872 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76454\" (UniqueName: \"kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454\") pod 
\"nova-metadata-0\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.346998 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a1df19-c67d-478f-94ef-73c304baf68e" path="/var/lib/kubelet/pods/96a1df19-c67d-478f-94ef-73c304baf68e/volumes" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.350488 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:41 crc kubenswrapper[4698]: I1006 12:05:41.868210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:41 crc kubenswrapper[4698]: W1006 12:05:41.875819 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb6c8251_edb8_421f_b6d4_ff703bf9dfad.slice/crio-b4cc3e8107b72fcae81ff6ae07af7233e609b984dfefa0e1620a3a954b985cd9 WatchSource:0}: Error finding container b4cc3e8107b72fcae81ff6ae07af7233e609b984dfefa0e1620a3a954b985cd9: Status 404 returned error can't find the container with id b4cc3e8107b72fcae81ff6ae07af7233e609b984dfefa0e1620a3a954b985cd9 Oct 06 12:05:42 crc kubenswrapper[4698]: I1006 12:05:42.917896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerStarted","Data":"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806"} Oct 06 12:05:42 crc kubenswrapper[4698]: I1006 12:05:42.918897 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerStarted","Data":"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a"} Oct 06 12:05:42 crc kubenswrapper[4698]: I1006 12:05:42.918925 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerStarted","Data":"b4cc3e8107b72fcae81ff6ae07af7233e609b984dfefa0e1620a3a954b985cd9"} Oct 06 12:05:42 crc kubenswrapper[4698]: I1006 12:05:42.962819 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.962780424 podStartE2EDuration="2.962780424s" podCreationTimestamp="2025-10-06 12:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:42.948903421 +0000 UTC m=+1230.361595604" watchObservedRunningTime="2025-10-06 12:05:42.962780424 +0000 UTC m=+1230.375472597" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.172473 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.173260 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.405249 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.506343 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.506408 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.575893 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.622300 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.696120 4698 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.696381 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-fm462" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="dnsmasq-dns" containerID="cri-o://cd3536548f888fccd268a5f94fdada0b9e621761a90ab63a23b09e2a29f3a4ac" gracePeriod=10 Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.928386 4698 generic.go:334] "Generic (PLEG): container finished" podID="582e7285-37d7-483e-8196-6fbcfe1cc9ec" containerID="a74e8cfe356a34926ecc11c993e83499c1c1d8a53f409820a7d75e011ea33b41" exitCode=0 Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.928952 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9hwq" event={"ID":"582e7285-37d7-483e-8196-6fbcfe1cc9ec","Type":"ContainerDied","Data":"a74e8cfe356a34926ecc11c993e83499c1c1d8a53f409820a7d75e011ea33b41"} Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.930627 4698 generic.go:334] "Generic (PLEG): container finished" podID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerID="cd3536548f888fccd268a5f94fdada0b9e621761a90ab63a23b09e2a29f3a4ac" exitCode=0 Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.931071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-fm462" event={"ID":"99dbf7f0-0e13-422a-bf7d-4060cc043b06","Type":"ContainerDied","Data":"cd3536548f888fccd268a5f94fdada0b9e621761a90ab63a23b09e2a29f3a4ac"} Oct 06 12:05:43 crc kubenswrapper[4698]: I1006 12:05:43.964865 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.258401 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.258785 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.364465 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.394096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0\") pod \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.395961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb\") pod \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.402414 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc\") pod \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.402930 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb\") pod 
\"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.403129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config\") pod \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.403389 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb97f\" (UniqueName: \"kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f\") pod \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\" (UID: \"99dbf7f0-0e13-422a-bf7d-4060cc043b06\") " Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.411468 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f" (OuterVolumeSpecName: "kube-api-access-rb97f") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "kube-api-access-rb97f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.466904 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.489365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.498100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.509047 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.509130 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.509141 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.509150 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb97f\" (UniqueName: \"kubernetes.io/projected/99dbf7f0-0e13-422a-bf7d-4060cc043b06-kube-api-access-rb97f\") on node \"crc\" DevicePath 
\"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.517864 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config" (OuterVolumeSpecName: "config") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.525472 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99dbf7f0-0e13-422a-bf7d-4060cc043b06" (UID: "99dbf7f0-0e13-422a-bf7d-4060cc043b06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.611737 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.611786 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99dbf7f0-0e13-422a-bf7d-4060cc043b06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.946341 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-fm462" event={"ID":"99dbf7f0-0e13-422a-bf7d-4060cc043b06","Type":"ContainerDied","Data":"c776f751128f14d422aa0b0ab8d89e681052e0ae0dd0759b5a21171dac1516ff"} Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.946437 4698 scope.go:117] "RemoveContainer" containerID="cd3536548f888fccd268a5f94fdada0b9e621761a90ab63a23b09e2a29f3a4ac" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.946470 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-fm462" Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.949209 4698 generic.go:334] "Generic (PLEG): container finished" podID="941bdc31-a448-4fce-910b-54eed75a1974" containerID="41fe2948ac012ee93159502e051c4f083b86682be3350fab5a2a399b5353d058" exitCode=0 Oct 06 12:05:44 crc kubenswrapper[4698]: I1006 12:05:44.954214 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zcd97" event={"ID":"941bdc31-a448-4fce-910b-54eed75a1974","Type":"ContainerDied","Data":"41fe2948ac012ee93159502e051c4f083b86682be3350fab5a2a399b5353d058"} Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:44.999496 4698 scope.go:117] "RemoveContainer" containerID="67dededc0c214ef5cef31776e9b9e6139c6f099a428f6650e8c9da24822ea744" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.037287 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.047086 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-fm462"] Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.348523 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" path="/var/lib/kubelet/pods/99dbf7f0-0e13-422a-bf7d-4060cc043b06/volumes" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.368533 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.433988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle\") pod \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.434193 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts\") pod \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.434339 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data\") pod \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.434396 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lg57\" (UniqueName: \"kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57\") pod \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\" (UID: \"582e7285-37d7-483e-8196-6fbcfe1cc9ec\") " Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.442195 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57" (OuterVolumeSpecName: "kube-api-access-2lg57") pod "582e7285-37d7-483e-8196-6fbcfe1cc9ec" (UID: "582e7285-37d7-483e-8196-6fbcfe1cc9ec"). InnerVolumeSpecName "kube-api-access-2lg57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.447255 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts" (OuterVolumeSpecName: "scripts") pod "582e7285-37d7-483e-8196-6fbcfe1cc9ec" (UID: "582e7285-37d7-483e-8196-6fbcfe1cc9ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.468607 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data" (OuterVolumeSpecName: "config-data") pod "582e7285-37d7-483e-8196-6fbcfe1cc9ec" (UID: "582e7285-37d7-483e-8196-6fbcfe1cc9ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.475129 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582e7285-37d7-483e-8196-6fbcfe1cc9ec" (UID: "582e7285-37d7-483e-8196-6fbcfe1cc9ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.537593 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.537630 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.537641 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e7285-37d7-483e-8196-6fbcfe1cc9ec-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.537651 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lg57\" (UniqueName: \"kubernetes.io/projected/582e7285-37d7-483e-8196-6fbcfe1cc9ec-kube-api-access-2lg57\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.968692 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9hwq" Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.969268 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9hwq" event={"ID":"582e7285-37d7-483e-8196-6fbcfe1cc9ec","Type":"ContainerDied","Data":"a5d0624874d9921f2e2de6a7ee0405ded6ed2d092865c83513bd5929a196781e"} Oct 06 12:05:45 crc kubenswrapper[4698]: I1006 12:05:45.969354 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d0624874d9921f2e2de6a7ee0405ded6ed2d092865c83513bd5929a196781e" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.181181 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.181520 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-log" containerID="cri-o://dc017e1b923a798a9570b294f0e60590f5e853e8c6c601c23cb81124e5b861e3" gracePeriod=30 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.181623 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-api" containerID="cri-o://c28c4c51da742f5427d5f2ff1a4aac7320a12f96d6774923c5fbbbcab51acc90" gracePeriod=30 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.195721 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.195995 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" containerName="nova-scheduler-scheduler" containerID="cri-o://3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b" gracePeriod=30 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 
12:05:46.264857 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.265118 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-log" containerID="cri-o://7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" gracePeriod=30 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.265572 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-metadata" containerID="cri-o://84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" gracePeriod=30 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.351647 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.351751 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.563760 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.661756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts\") pod \"941bdc31-a448-4fce-910b-54eed75a1974\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.662146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle\") pod \"941bdc31-a448-4fce-910b-54eed75a1974\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.662276 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n76c\" (UniqueName: \"kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c\") pod \"941bdc31-a448-4fce-910b-54eed75a1974\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.662349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data\") pod \"941bdc31-a448-4fce-910b-54eed75a1974\" (UID: \"941bdc31-a448-4fce-910b-54eed75a1974\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.669567 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts" (OuterVolumeSpecName: "scripts") pod "941bdc31-a448-4fce-910b-54eed75a1974" (UID: "941bdc31-a448-4fce-910b-54eed75a1974"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.669833 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c" (OuterVolumeSpecName: "kube-api-access-9n76c") pod "941bdc31-a448-4fce-910b-54eed75a1974" (UID: "941bdc31-a448-4fce-910b-54eed75a1974"). InnerVolumeSpecName "kube-api-access-9n76c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.706209 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data" (OuterVolumeSpecName: "config-data") pod "941bdc31-a448-4fce-910b-54eed75a1974" (UID: "941bdc31-a448-4fce-910b-54eed75a1974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.706752 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "941bdc31-a448-4fce-910b-54eed75a1974" (UID: "941bdc31-a448-4fce-910b-54eed75a1974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.764481 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.764529 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n76c\" (UniqueName: \"kubernetes.io/projected/941bdc31-a448-4fce-910b-54eed75a1974-kube-api-access-9n76c\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.764541 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.764550 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/941bdc31-a448-4fce-910b-54eed75a1974-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.841924 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.869055 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs\") pod \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.869162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data\") pod \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.869294 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76454\" (UniqueName: \"kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454\") pod \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.869340 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle\") pod \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.869374 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs\") pod \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\" (UID: \"fb6c8251-edb8-421f-b6d4-ff703bf9dfad\") " Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.870510 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs" (OuterVolumeSpecName: "logs") pod "fb6c8251-edb8-421f-b6d4-ff703bf9dfad" (UID: "fb6c8251-edb8-421f-b6d4-ff703bf9dfad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.887600 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454" (OuterVolumeSpecName: "kube-api-access-76454") pod "fb6c8251-edb8-421f-b6d4-ff703bf9dfad" (UID: "fb6c8251-edb8-421f-b6d4-ff703bf9dfad"). InnerVolumeSpecName "kube-api-access-76454". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.909389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb6c8251-edb8-421f-b6d4-ff703bf9dfad" (UID: "fb6c8251-edb8-421f-b6d4-ff703bf9dfad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.913186 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data" (OuterVolumeSpecName: "config-data") pod "fb6c8251-edb8-421f-b6d4-ff703bf9dfad" (UID: "fb6c8251-edb8-421f-b6d4-ff703bf9dfad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.952329 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fb6c8251-edb8-421f-b6d4-ff703bf9dfad" (UID: "fb6c8251-edb8-421f-b6d4-ff703bf9dfad"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.973597 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76454\" (UniqueName: \"kubernetes.io/projected/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-kube-api-access-76454\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.973644 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.973661 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.973675 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.973690 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c8251-edb8-421f-b6d4-ff703bf9dfad-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.986307 4698 generic.go:334] "Generic (PLEG): container finished" podID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerID="dc017e1b923a798a9570b294f0e60590f5e853e8c6c601c23cb81124e5b861e3" exitCode=143 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.986428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerDied","Data":"dc017e1b923a798a9570b294f0e60590f5e853e8c6c601c23cb81124e5b861e3"} 
Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991625 4698 generic.go:334] "Generic (PLEG): container finished" podID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerID="84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" exitCode=0 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991664 4698 generic.go:334] "Generic (PLEG): container finished" podID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerID="7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" exitCode=143 Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerDied","Data":"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806"} Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991749 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerDied","Data":"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a"} Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991763 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb6c8251-edb8-421f-b6d4-ff703bf9dfad","Type":"ContainerDied","Data":"b4cc3e8107b72fcae81ff6ae07af7233e609b984dfefa0e1620a3a954b985cd9"} Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991782 4698 scope.go:117] "RemoveContainer" containerID="84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.991929 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.996882 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zcd97" event={"ID":"941bdc31-a448-4fce-910b-54eed75a1974","Type":"ContainerDied","Data":"9fdb275f91662ccc62531740a8df0714ed617a1a58c18f4064e6fe7b8ea00e3a"} Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.996939 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fdb275f91662ccc62531740a8df0714ed617a1a58c18f4064e6fe7b8ea00e3a" Oct 06 12:05:46 crc kubenswrapper[4698]: I1006 12:05:46.997024 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zcd97" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.041429 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.080314 4698 scope.go:117] "RemoveContainer" containerID="7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.082657 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104174 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104853 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582e7285-37d7-483e-8196-6fbcfe1cc9ec" containerName="nova-manage" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104873 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e7285-37d7-483e-8196-6fbcfe1cc9ec" containerName="nova-manage" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104885 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" 
containerName="nova-metadata-log" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104892 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-log" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104903 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-metadata" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104910 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-metadata" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104925 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="dnsmasq-dns" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104933 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="dnsmasq-dns" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104949 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="init" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104955 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="init" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.104969 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941bdc31-a448-4fce-910b-54eed75a1974" containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.104976 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="941bdc31-a448-4fce-910b-54eed75a1974" containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.111910 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="941bdc31-a448-4fce-910b-54eed75a1974" 
containerName="nova-cell1-conductor-db-sync" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.111987 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="99dbf7f0-0e13-422a-bf7d-4060cc043b06" containerName="dnsmasq-dns" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.112033 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="582e7285-37d7-483e-8196-6fbcfe1cc9ec" containerName="nova-manage" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.112060 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-metadata" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.112075 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" containerName="nova-metadata-log" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.119660 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.134150 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.134538 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.141188 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.142618 4698 scope.go:117] "RemoveContainer" containerID="84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.144613 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806\": container with ID starting with 
84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806 not found: ID does not exist" containerID="84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.144818 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806"} err="failed to get container status \"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806\": rpc error: code = NotFound desc = could not find container \"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806\": container with ID starting with 84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806 not found: ID does not exist" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.144927 4698 scope.go:117] "RemoveContainer" containerID="7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" Oct 06 12:05:47 crc kubenswrapper[4698]: E1006 12:05:47.145504 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a\": container with ID starting with 7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a not found: ID does not exist" containerID="7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.145785 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a"} err="failed to get container status \"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a\": rpc error: code = NotFound desc = could not find container \"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a\": container with ID starting with 7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a not found: ID does not 
exist" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.145862 4698 scope.go:117] "RemoveContainer" containerID="84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.146431 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806"} err="failed to get container status \"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806\": rpc error: code = NotFound desc = could not find container \"84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806\": container with ID starting with 84033c5bbbe0dca618988713c92a87f82f2adc61242f518405c0dac31e8a0806 not found: ID does not exist" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.146522 4698 scope.go:117] "RemoveContainer" containerID="7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.149598 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a"} err="failed to get container status \"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a\": rpc error: code = NotFound desc = could not find container \"7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a\": container with ID starting with 7beb438bb4a866ac2d7bfb0c2484116a06804cd9989e87c9b2f7f87393436a2a not found: ID does not exist" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.176642 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.180000 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.182665 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.187696 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.187752 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.187792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.187831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.187868 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5l9\" (UniqueName: 
\"kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.191360 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.307630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.307917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpqj\" (UniqueName: \"kubernetes.io/projected/fdb56a27-9290-42b9-9936-6de34abca79c-kube-api-access-vdpqj\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.310053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.310314 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.320147 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.320355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.320463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5l9\" (UniqueName: \"kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.320493 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.312146 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.319972 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.348659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5l9\" (UniqueName: \"kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.354480 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.356112 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6c8251-edb8-421f-b6d4-ff703bf9dfad" path="/var/lib/kubelet/pods/fb6c8251-edb8-421f-b6d4-ff703bf9dfad/volumes" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.366510 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data\") pod \"nova-metadata-0\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.433589 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.434040 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.434223 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpqj\" (UniqueName: \"kubernetes.io/projected/fdb56a27-9290-42b9-9936-6de34abca79c-kube-api-access-vdpqj\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.440756 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.440893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb56a27-9290-42b9-9936-6de34abca79c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.452538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpqj\" (UniqueName: \"kubernetes.io/projected/fdb56a27-9290-42b9-9936-6de34abca79c-kube-api-access-vdpqj\") pod \"nova-cell1-conductor-0\" (UID: \"fdb56a27-9290-42b9-9936-6de34abca79c\") " pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.479828 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.530304 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.595147 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.749417 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swm4c\" (UniqueName: \"kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c\") pod \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.749469 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data\") pod \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.749718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle\") pod \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\" (UID: \"46e5053d-2e18-4ba7-ae0b-b426e0127ad0\") " Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.766031 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c" (OuterVolumeSpecName: "kube-api-access-swm4c") pod "46e5053d-2e18-4ba7-ae0b-b426e0127ad0" (UID: "46e5053d-2e18-4ba7-ae0b-b426e0127ad0"). InnerVolumeSpecName "kube-api-access-swm4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.796346 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46e5053d-2e18-4ba7-ae0b-b426e0127ad0" (UID: "46e5053d-2e18-4ba7-ae0b-b426e0127ad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.801345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data" (OuterVolumeSpecName: "config-data") pod "46e5053d-2e18-4ba7-ae0b-b426e0127ad0" (UID: "46e5053d-2e18-4ba7-ae0b-b426e0127ad0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.806686 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.860593 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swm4c\" (UniqueName: \"kubernetes.io/projected/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-kube-api-access-swm4c\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.860667 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:47 crc kubenswrapper[4698]: I1006 12:05:47.860678 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e5053d-2e18-4ba7-ae0b-b426e0127ad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.051441 4698 generic.go:334] 
"Generic (PLEG): container finished" podID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" containerID="3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b" exitCode=0 Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.051488 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46e5053d-2e18-4ba7-ae0b-b426e0127ad0","Type":"ContainerDied","Data":"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b"} Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.051518 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46e5053d-2e18-4ba7-ae0b-b426e0127ad0","Type":"ContainerDied","Data":"ad4b81235f1d504366bea1c4b4a0cca43942b25b551eddf04f0bccb03c9533bb"} Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.051537 4698 scope.go:117] "RemoveContainer" containerID="3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.051649 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.082353 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.133555 4698 scope.go:117] "RemoveContainer" containerID="3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b" Oct 06 12:05:48 crc kubenswrapper[4698]: E1006 12:05:48.146142 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b\": container with ID starting with 3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b not found: ID does not exist" containerID="3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.146192 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b"} err="failed to get container status \"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b\": rpc error: code = NotFound desc = could not find container \"3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b\": container with ID starting with 3d36a528042fa0bd34586e584412e90d63a9bf2a02a55552ce4104450256c38b not found: ID does not exist" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.166195 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.224067 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.263594 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.273678 4698 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: E1006 12:05:48.274215 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" containerName="nova-scheduler-scheduler" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.274237 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" containerName="nova-scheduler-scheduler" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.274477 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" containerName="nova-scheduler-scheduler" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.279982 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.283494 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.290204 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.379648 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.380132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 
12:05:48.380288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dl6\" (UniqueName: \"kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.482763 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.482995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dl6\" (UniqueName: \"kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.483054 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.499332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.500157 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.502905 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dl6\" (UniqueName: \"kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6\") pod \"nova-scheduler-0\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " pod="openstack/nova-scheduler-0" Oct 06 12:05:48 crc kubenswrapper[4698]: I1006 12:05:48.739394 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.069193 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerStarted","Data":"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef"} Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.070741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerStarted","Data":"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8"} Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.070826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerStarted","Data":"c1358257c1900a79e7da47c6970ba484ddde8550c19edc149e8fac6c3eb16ae2"} Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.072233 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdb56a27-9290-42b9-9936-6de34abca79c","Type":"ContainerStarted","Data":"7b33df8533c063539e6a4ad31326fe3362af0b75e95966a1658b187e8bdcb8f2"} Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 
12:05:49.072324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdb56a27-9290-42b9-9936-6de34abca79c","Type":"ContainerStarted","Data":"a32d1c67394774fbf5542d1862d22a4b6db0591089d51fe54cc2462bc151a283"} Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.072407 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.119292 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.119266958 podStartE2EDuration="2.119266958s" podCreationTimestamp="2025-10-06 12:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:49.096552923 +0000 UTC m=+1236.509245086" watchObservedRunningTime="2025-10-06 12:05:49.119266958 +0000 UTC m=+1236.531959131" Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.125129 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.125107898 podStartE2EDuration="2.125107898s" podCreationTimestamp="2025-10-06 12:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:49.123958441 +0000 UTC m=+1236.536650614" watchObservedRunningTime="2025-10-06 12:05:49.125107898 +0000 UTC m=+1236.537800061" Oct 06 12:05:49 crc kubenswrapper[4698]: W1006 12:05:49.258503 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b4d3b77_9014_4eec_96e1_c31df74e6a14.slice/crio-085c0d7a27aa8809583f7e45e90a1040f0a222dc5f171733eb4b1f6d4c873e98 WatchSource:0}: Error finding container 085c0d7a27aa8809583f7e45e90a1040f0a222dc5f171733eb4b1f6d4c873e98: Status 404 returned error 
can't find the container with id 085c0d7a27aa8809583f7e45e90a1040f0a222dc5f171733eb4b1f6d4c873e98 Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.263624 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:05:49 crc kubenswrapper[4698]: I1006 12:05:49.361818 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e5053d-2e18-4ba7-ae0b-b426e0127ad0" path="/var/lib/kubelet/pods/46e5053d-2e18-4ba7-ae0b-b426e0127ad0/volumes" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.101154 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b4d3b77-9014-4eec-96e1-c31df74e6a14","Type":"ContainerStarted","Data":"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413"} Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.101245 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b4d3b77-9014-4eec-96e1-c31df74e6a14","Type":"ContainerStarted","Data":"085c0d7a27aa8809583f7e45e90a1040f0a222dc5f171733eb4b1f6d4c873e98"} Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.104770 4698 generic.go:334] "Generic (PLEG): container finished" podID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerID="c28c4c51da742f5427d5f2ff1a4aac7320a12f96d6774923c5fbbbcab51acc90" exitCode=0 Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.105346 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerDied","Data":"c28c4c51da742f5427d5f2ff1a4aac7320a12f96d6774923c5fbbbcab51acc90"} Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.105385 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74","Type":"ContainerDied","Data":"e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb"} Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 
12:05:50.105397 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e186765c8d206226749b38f44cd5bf95d54f49a4951bc1b168e4a01fd0e00adb" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.127825 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.127796231 podStartE2EDuration="2.127796231s" podCreationTimestamp="2025-10-06 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:50.117433952 +0000 UTC m=+1237.530126125" watchObservedRunningTime="2025-10-06 12:05:50.127796231 +0000 UTC m=+1237.540488404" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.158246 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.228602 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmd2c\" (UniqueName: \"kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c\") pod \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.228833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data\") pod \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.228876 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs\") pod \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 
12:05:50.228963 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle\") pod \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\" (UID: \"fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74\") " Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.229705 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs" (OuterVolumeSpecName: "logs") pod "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" (UID: "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.236590 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c" (OuterVolumeSpecName: "kube-api-access-gmd2c") pod "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" (UID: "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74"). InnerVolumeSpecName "kube-api-access-gmd2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.265039 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data" (OuterVolumeSpecName: "config-data") pod "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" (UID: "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.279513 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" (UID: "fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.331847 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.331884 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.331894 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:50 crc kubenswrapper[4698]: I1006 12:05:50.331910 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmd2c\" (UniqueName: \"kubernetes.io/projected/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74-kube-api-access-gmd2c\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.115681 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.164745 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.175168 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.216090 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:51 crc kubenswrapper[4698]: E1006 12:05:51.216652 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-api" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.216675 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-api" Oct 06 12:05:51 crc kubenswrapper[4698]: E1006 12:05:51.216704 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-log" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.216711 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-log" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.216932 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-api" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.216962 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" containerName="nova-api-log" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.218223 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.221612 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.251123 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.342675 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74" path="/var/lib/kubelet/pods/fcd2a6a4-b040-4cc1-bc76-00ae3c0a1d74/volumes" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.354097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.354173 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.354200 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cnt\" (UniqueName: \"kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.354264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.456097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.456236 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.456287 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.456311 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cnt\" (UniqueName: \"kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.457203 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.462407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.462708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.482665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45cnt\" (UniqueName: \"kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt\") pod \"nova-api-0\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " pod="openstack/nova-api-0" Oct 06 12:05:51 crc kubenswrapper[4698]: I1006 12:05:51.560611 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:05:52 crc kubenswrapper[4698]: W1006 12:05:52.081319 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d19ad1b_280b_4ef2_a6d7_b626a19a94f2.slice/crio-afc76fffd6b430c32ecd5799d8acda1ad54f008fae4d5ce72999e44ef469d8ba WatchSource:0}: Error finding container afc76fffd6b430c32ecd5799d8acda1ad54f008fae4d5ce72999e44ef469d8ba: Status 404 returned error can't find the container with id afc76fffd6b430c32ecd5799d8acda1ad54f008fae4d5ce72999e44ef469d8ba Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.083910 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.128639 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerStarted","Data":"afc76fffd6b430c32ecd5799d8acda1ad54f008fae4d5ce72999e44ef469d8ba"} Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.481450 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.481520 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.962253 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:52 crc kubenswrapper[4698]: I1006 12:05:52.963246 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="258084da-8b4a-484d-b10a-0511f89cac2f" containerName="kube-state-metrics" containerID="cri-o://825a8b03b342877d9c3baf904629d67aa6091402b56058dc43629fba007a7eb2" gracePeriod=30 Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.194622 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerStarted","Data":"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745"} Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.194673 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerStarted","Data":"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e"} Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.203813 4698 generic.go:334] "Generic (PLEG): container finished" podID="258084da-8b4a-484d-b10a-0511f89cac2f" containerID="825a8b03b342877d9c3baf904629d67aa6091402b56058dc43629fba007a7eb2" exitCode=2 Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.203859 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"258084da-8b4a-484d-b10a-0511f89cac2f","Type":"ContainerDied","Data":"825a8b03b342877d9c3baf904629d67aa6091402b56058dc43629fba007a7eb2"} Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.739708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.766245 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.789500 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.789473976 podStartE2EDuration="2.789473976s" podCreationTimestamp="2025-10-06 12:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:05:53.232829406 +0000 UTC m=+1240.645521579" watchObservedRunningTime="2025-10-06 12:05:53.789473976 +0000 UTC m=+1241.202166149" Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.848136 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9787m\" (UniqueName: \"kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m\") pod \"258084da-8b4a-484d-b10a-0511f89cac2f\" (UID: \"258084da-8b4a-484d-b10a-0511f89cac2f\") " Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.869520 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m" (OuterVolumeSpecName: "kube-api-access-9787m") pod "258084da-8b4a-484d-b10a-0511f89cac2f" (UID: "258084da-8b4a-484d-b10a-0511f89cac2f"). InnerVolumeSpecName "kube-api-access-9787m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:53 crc kubenswrapper[4698]: I1006 12:05:53.951635 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9787m\" (UniqueName: \"kubernetes.io/projected/258084da-8b4a-484d-b10a-0511f89cac2f-kube-api-access-9787m\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.215772 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.215771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"258084da-8b4a-484d-b10a-0511f89cac2f","Type":"ContainerDied","Data":"617f83fee5d389141e2392d272b076d68df66f74caacee931d1f202774cffc52"} Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.215879 4698 scope.go:117] "RemoveContainer" containerID="825a8b03b342877d9c3baf904629d67aa6091402b56058dc43629fba007a7eb2" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.252962 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.261512 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.300804 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:54 crc kubenswrapper[4698]: E1006 12:05:54.302125 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258084da-8b4a-484d-b10a-0511f89cac2f" containerName="kube-state-metrics" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.302152 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="258084da-8b4a-484d-b10a-0511f89cac2f" containerName="kube-state-metrics" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.302564 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="258084da-8b4a-484d-b10a-0511f89cac2f" containerName="kube-state-metrics" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.304623 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.311529 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.343379 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.347764 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.468418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.468471 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.468592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknhd\" (UniqueName: \"kubernetes.io/projected/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-api-access-nknhd\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc 
kubenswrapper[4698]: I1006 12:05:54.468631 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.571242 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknhd\" (UniqueName: \"kubernetes.io/projected/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-api-access-nknhd\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.571324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.571450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.571472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc 
kubenswrapper[4698]: I1006 12:05:54.578976 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.579907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.581420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.593285 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknhd\" (UniqueName: \"kubernetes.io/projected/c2b3ac80-8153-430c-893a-21c4cc2f2a5d-kube-api-access-nknhd\") pod \"kube-state-metrics-0\" (UID: \"c2b3ac80-8153-430c-893a-21c4cc2f2a5d\") " pod="openstack/kube-state-metrics-0" Oct 06 12:05:54 crc kubenswrapper[4698]: I1006 12:05:54.660491 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.210557 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.273407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c2b3ac80-8153-430c-893a-21c4cc2f2a5d","Type":"ContainerStarted","Data":"48dd549019d5248784b4db9c6cc0cb5be91f336e7bee707e4a134af0cdef3d6e"} Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.358077 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258084da-8b4a-484d-b10a-0511f89cac2f" path="/var/lib/kubelet/pods/258084da-8b4a-484d-b10a-0511f89cac2f/volumes" Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.358734 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.359049 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-central-agent" containerID="cri-o://d6e02dbbe1634ea88f3b936f2bab56044ca0c10f1ba2667bb9e228fefcb78f17" gracePeriod=30 Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.359452 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="proxy-httpd" containerID="cri-o://43e0f98635075d66a6441c1e9d0a92d29066ec02df18a1f0e6a36893286ade2f" gracePeriod=30 Oct 06 12:05:55 crc kubenswrapper[4698]: I1006 12:05:55.359499 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="sg-core" containerID="cri-o://da65acd0505197b0dfa12126fc5e30e36497f49d32ffa59fd1394adfef7f17ea" gracePeriod=30 Oct 06 12:05:55 crc 
kubenswrapper[4698]: I1006 12:05:55.359529 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-notification-agent" containerID="cri-o://5ead9debc282b2705dfe0d498dd2ab98beca622c7be67fa68001ddcf3dfd7e74" gracePeriod=30 Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291547 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerID="43e0f98635075d66a6441c1e9d0a92d29066ec02df18a1f0e6a36893286ade2f" exitCode=0 Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291904 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerID="da65acd0505197b0dfa12126fc5e30e36497f49d32ffa59fd1394adfef7f17ea" exitCode=2 Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291914 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerID="5ead9debc282b2705dfe0d498dd2ab98beca622c7be67fa68001ddcf3dfd7e74" exitCode=0 Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291923 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerID="d6e02dbbe1634ea88f3b936f2bab56044ca0c10f1ba2667bb9e228fefcb78f17" exitCode=0 Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291644 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerDied","Data":"43e0f98635075d66a6441c1e9d0a92d29066ec02df18a1f0e6a36893286ade2f"} Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.291979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerDied","Data":"da65acd0505197b0dfa12126fc5e30e36497f49d32ffa59fd1394adfef7f17ea"} Oct 06 12:05:56 crc kubenswrapper[4698]: 
I1006 12:05:56.292002 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerDied","Data":"5ead9debc282b2705dfe0d498dd2ab98beca622c7be67fa68001ddcf3dfd7e74"} Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.292027 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerDied","Data":"d6e02dbbe1634ea88f3b936f2bab56044ca0c10f1ba2667bb9e228fefcb78f17"} Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.778444 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.923391 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.923840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.923888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.923933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqhxf\" (UniqueName: 
\"kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.924238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.924300 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.924327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml\") pod \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\" (UID: \"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6\") " Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.926264 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.926611 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.931742 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf" (OuterVolumeSpecName: "kube-api-access-vqhxf") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "kube-api-access-vqhxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:05:56 crc kubenswrapper[4698]: I1006 12:05:56.949820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts" (OuterVolumeSpecName: "scripts") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.009722 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.028549 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.028584 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.028597 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.028609 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.028621 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqhxf\" (UniqueName: \"kubernetes.io/projected/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-kube-api-access-vqhxf\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.108055 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data" (OuterVolumeSpecName: "config-data") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.129272 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" (UID: "0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.133750 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.133787 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.304540 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6","Type":"ContainerDied","Data":"a67c99f76355770bca5a157399eae56063ce9f7aed3ea8afba81e1c4348dc336"} Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.304590 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.304695 4698 scope.go:117] "RemoveContainer" containerID="43e0f98635075d66a6441c1e9d0a92d29066ec02df18a1f0e6a36893286ade2f" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.307060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c2b3ac80-8153-430c-893a-21c4cc2f2a5d","Type":"ContainerStarted","Data":"95bd847a3a4f44e504581a328d460f75185ffc5b6a54b406e65211ab70a04e5d"} Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.307233 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.329718 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.950858537 podStartE2EDuration="3.329698173s" podCreationTimestamp="2025-10-06 12:05:54 +0000 UTC" firstStartedPulling="2025-10-06 12:05:55.215394913 +0000 UTC m=+1242.628087086" lastFinishedPulling="2025-10-06 12:05:56.594234549 +0000 UTC m=+1244.006926722" observedRunningTime="2025-10-06 12:05:57.328062314 +0000 UTC m=+1244.740754487" watchObservedRunningTime="2025-10-06 12:05:57.329698173 +0000 UTC m=+1244.742390346" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.338765 4698 scope.go:117] "RemoveContainer" containerID="da65acd0505197b0dfa12126fc5e30e36497f49d32ffa59fd1394adfef7f17ea" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.365522 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.387825 4698 scope.go:117] "RemoveContainer" containerID="5ead9debc282b2705dfe0d498dd2ab98beca622c7be67fa68001ddcf3dfd7e74" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.398773 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:57 
crc kubenswrapper[4698]: I1006 12:05:57.410494 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:57 crc kubenswrapper[4698]: E1006 12:05:57.411103 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="sg-core" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411121 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="sg-core" Oct 06 12:05:57 crc kubenswrapper[4698]: E1006 12:05:57.411151 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="proxy-httpd" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411158 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="proxy-httpd" Oct 06 12:05:57 crc kubenswrapper[4698]: E1006 12:05:57.411181 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-notification-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411188 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-notification-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: E1006 12:05:57.411211 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-central-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411218 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-central-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411414 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-central-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 
12:05:57.411434 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="proxy-httpd" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411444 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="ceilometer-notification-agent" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.411462 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" containerName="sg-core" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.414761 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.416994 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.419277 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.419658 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.448130 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.448384 4698 scope.go:117] "RemoveContainer" containerID="d6e02dbbe1634ea88f3b936f2bab56044ca0c10f1ba2667bb9e228fefcb78f17" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.480629 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.480896 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.545142 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.545520 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.545625 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.545745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.546145 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fsp\" (UniqueName: \"kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.546201 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.546293 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.546391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.562070 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fsp\" (UniqueName: \"kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649641 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649699 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.649742 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 
crc kubenswrapper[4698]: I1006 12:05:57.650161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.650541 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.655342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.655726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.655763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.656551 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.658641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.670178 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fsp\" (UniqueName: \"kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp\") pod \"ceilometer-0\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " pod="openstack/ceilometer-0" Oct 06 12:05:57 crc kubenswrapper[4698]: I1006 12:05:57.754937 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:05:58 crc kubenswrapper[4698]: I1006 12:05:58.363325 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:05:58 crc kubenswrapper[4698]: I1006 12:05:58.497259 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:05:58 crc kubenswrapper[4698]: I1006 12:05:58.497259 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:05:58 crc kubenswrapper[4698]: I1006 12:05:58.740625 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Oct 06 12:05:58 crc kubenswrapper[4698]: I1006 12:05:58.775197 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:05:59 crc kubenswrapper[4698]: I1006 12:05:59.342796 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6" path="/var/lib/kubelet/pods/0a8acbba-aba3-4b4f-a5bf-9c11f8fd31f6/volumes" Oct 06 12:05:59 crc kubenswrapper[4698]: I1006 12:05:59.344337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerStarted","Data":"54c266c51c2d611f5cb146073dd3ccc54d4ae5e4344a161ad5ae37341e546631"} Oct 06 12:05:59 crc kubenswrapper[4698]: I1006 12:05:59.344452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerStarted","Data":"bf3f74e7e050b9b0072a647749ffbe0f8c4daf256f29c83d7b420dfcd1eb0e5f"} Oct 06 12:05:59 crc kubenswrapper[4698]: I1006 12:05:59.381796 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:06:00 crc kubenswrapper[4698]: I1006 12:06:00.360003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerStarted","Data":"7905a3da2bfe010b5afcf33e6e1e411511b8a4affcf6bb35221d2fd80e650605"} Oct 06 12:06:01 crc kubenswrapper[4698]: I1006 12:06:01.390387 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerStarted","Data":"cbd24fed508d5af95fea49d6be399fa78925fec0e082cc255bf9aec47c81d676"} Oct 06 12:06:01 crc kubenswrapper[4698]: I1006 12:06:01.561624 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 
12:06:01 crc kubenswrapper[4698]: I1006 12:06:01.561691 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:02 crc kubenswrapper[4698]: I1006 12:06:02.606455 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:02 crc kubenswrapper[4698]: I1006 12:06:02.648441 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:03 crc kubenswrapper[4698]: I1006 12:06:03.426983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerStarted","Data":"5894bc4b453f9605f96cca797ccef75cd6f67138d19f19bf09d6de31580345dc"} Oct 06 12:06:03 crc kubenswrapper[4698]: I1006 12:06:03.428167 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:06:03 crc kubenswrapper[4698]: I1006 12:06:03.457234 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.370286734 podStartE2EDuration="6.457213532s" podCreationTimestamp="2025-10-06 12:05:57 +0000 UTC" firstStartedPulling="2025-10-06 12:05:58.369904627 +0000 UTC m=+1245.782596800" lastFinishedPulling="2025-10-06 12:06:02.456831385 +0000 UTC m=+1249.869523598" observedRunningTime="2025-10-06 12:06:03.449081847 +0000 UTC m=+1250.861774040" watchObservedRunningTime="2025-10-06 12:06:03.457213532 +0000 UTC m=+1250.869905705" Oct 06 12:06:04 crc kubenswrapper[4698]: I1006 
12:06:04.681368 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 12:06:07 crc kubenswrapper[4698]: I1006 12:06:07.490657 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:07 crc kubenswrapper[4698]: I1006 12:06:07.495133 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:07 crc kubenswrapper[4698]: I1006 12:06:07.509128 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:08 crc kubenswrapper[4698]: I1006 12:06:08.510135 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.397432 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.517365 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle\") pod \"5af883c8-8d15-48a6-8aab-3f648a484b79\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.517602 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkdvw\" (UniqueName: \"kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw\") pod \"5af883c8-8d15-48a6-8aab-3f648a484b79\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.517935 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data\") pod 
\"5af883c8-8d15-48a6-8aab-3f648a484b79\" (UID: \"5af883c8-8d15-48a6-8aab-3f648a484b79\") " Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.528689 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw" (OuterVolumeSpecName: "kube-api-access-xkdvw") pod "5af883c8-8d15-48a6-8aab-3f648a484b79" (UID: "5af883c8-8d15-48a6-8aab-3f648a484b79"). InnerVolumeSpecName "kube-api-access-xkdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.535948 4698 generic.go:334] "Generic (PLEG): container finished" podID="5af883c8-8d15-48a6-8aab-3f648a484b79" containerID="823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287" exitCode=137 Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.536063 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.536397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5af883c8-8d15-48a6-8aab-3f648a484b79","Type":"ContainerDied","Data":"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287"} Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.536500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5af883c8-8d15-48a6-8aab-3f648a484b79","Type":"ContainerDied","Data":"2aefccb2e6bbbd74fb61be2990a80491b0383d7f3f13edb8b09394b6d3a77ad9"} Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.536679 4698 scope.go:117] "RemoveContainer" containerID="823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.555076 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5af883c8-8d15-48a6-8aab-3f648a484b79" (UID: "5af883c8-8d15-48a6-8aab-3f648a484b79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.578534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data" (OuterVolumeSpecName: "config-data") pod "5af883c8-8d15-48a6-8aab-3f648a484b79" (UID: "5af883c8-8d15-48a6-8aab-3f648a484b79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.621131 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkdvw\" (UniqueName: \"kubernetes.io/projected/5af883c8-8d15-48a6-8aab-3f648a484b79-kube-api-access-xkdvw\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.621168 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.621399 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af883c8-8d15-48a6-8aab-3f648a484b79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.642751 4698 scope.go:117] "RemoveContainer" containerID="823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287" Oct 06 12:06:10 crc kubenswrapper[4698]: E1006 12:06:10.643298 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287\": container 
with ID starting with 823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287 not found: ID does not exist" containerID="823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.643344 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287"} err="failed to get container status \"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287\": rpc error: code = NotFound desc = could not find container \"823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287\": container with ID starting with 823a453055e7e3f9ec3a325a70a0b1616df577bbbd2fbc939126146fcc63d287 not found: ID does not exist" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.906486 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.926593 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.940091 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:10 crc kubenswrapper[4698]: E1006 12:06:10.940799 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af883c8-8d15-48a6-8aab-3f648a484b79" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.940826 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af883c8-8d15-48a6-8aab-3f648a484b79" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.941166 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af883c8-8d15-48a6-8aab-3f648a484b79" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.942226 4698 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.947119 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.947359 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.947508 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 06 12:06:10 crc kubenswrapper[4698]: I1006 12:06:10.952997 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.032340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.032567 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.032610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ht9p\" (UniqueName: \"kubernetes.io/projected/f0828613-cf15-40b5-9af1-c13b856373bd-kube-api-access-8ht9p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" 
Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.032635 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.032664 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.134747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.134894 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.134931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ht9p\" (UniqueName: \"kubernetes.io/projected/f0828613-cf15-40b5-9af1-c13b856373bd-kube-api-access-8ht9p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc 
kubenswrapper[4698]: I1006 12:06:11.134956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.134984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.140345 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.140842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.145062 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.145376 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0828613-cf15-40b5-9af1-c13b856373bd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.158837 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ht9p\" (UniqueName: \"kubernetes.io/projected/f0828613-cf15-40b5-9af1-c13b856373bd-kube-api-access-8ht9p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0828613-cf15-40b5-9af1-c13b856373bd\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.272993 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.356005 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af883c8-8d15-48a6-8aab-3f648a484b79" path="/var/lib/kubelet/pods/5af883c8-8d15-48a6-8aab-3f648a484b79/volumes" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.570632 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.572273 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.579274 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.587411 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:11 crc kubenswrapper[4698]: I1006 12:06:11.816823 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.585345 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0828613-cf15-40b5-9af1-c13b856373bd","Type":"ContainerStarted","Data":"b7f4a94b10faee80d5073a3985a709a2e0b4fe811484163e0e629c2e33ba4d35"} Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.586058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0828613-cf15-40b5-9af1-c13b856373bd","Type":"ContainerStarted","Data":"829dc4c6bb54821c85992f0f388befda6366abaa4cc5ffc9c519c88e278f4cda"} Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.586501 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.594999 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.615729 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6157039859999998 podStartE2EDuration="2.615703986s" podCreationTimestamp="2025-10-06 12:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:12.610606144 +0000 UTC m=+1260.023298397" watchObservedRunningTime="2025-10-06 12:06:12.615703986 +0000 UTC m=+1260.028396159" Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.842174 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.849049 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:12 crc kubenswrapper[4698]: I1006 12:06:12.868198 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.013915 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqg8\" (UniqueName: \"kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.014051 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.014102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.014144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.014174 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.014311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.116717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.117143 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.117306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.117364 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.117843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.117862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.118128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqg8\" (UniqueName: \"kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.118333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.118714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.119109 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.119405 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.139891 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqg8\" (UniqueName: \"kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8\") pod \"dnsmasq-dns-59cf4bdb65-p7h28\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.188625 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:13 crc kubenswrapper[4698]: I1006 12:06:13.766063 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:06:14 crc kubenswrapper[4698]: I1006 12:06:14.607044 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerStarted","Data":"e29c1c3303a857cd21b05ecf2db1d5fc5b1e16163bac9138f977945b3a5df1f0"} Oct 06 12:06:17 crc kubenswrapper[4698]: E1006 12:06:17.532664 4698 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.204s" Oct 06 12:06:17 crc kubenswrapper[4698]: I1006 12:06:17.533542 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:17 crc kubenswrapper[4698]: I1006 12:06:17.533563 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:17 crc kubenswrapper[4698]: I1006 12:06:17.537075 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" containerID="cri-o://59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e" gracePeriod=30 Oct 06 12:06:17 crc kubenswrapper[4698]: I1006 12:06:17.537815 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" containerID="cri-o://4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745" gracePeriod=30 Oct 06 12:06:17 crc kubenswrapper[4698]: I1006 12:06:17.650435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" 
event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerStarted","Data":"4ddbfd4a42f2806b2d3b34a8e0599067c233e796a9421b143d6031a87d37a4a2"} Oct 06 12:06:18 crc kubenswrapper[4698]: I1006 12:06:18.669044 4698 generic.go:334] "Generic (PLEG): container finished" podID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerID="59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e" exitCode=143 Oct 06 12:06:18 crc kubenswrapper[4698]: I1006 12:06:18.669441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerDied","Data":"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e"} Oct 06 12:06:18 crc kubenswrapper[4698]: I1006 12:06:18.674730 4698 generic.go:334] "Generic (PLEG): container finished" podID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerID="4ddbfd4a42f2806b2d3b34a8e0599067c233e796a9421b143d6031a87d37a4a2" exitCode=0 Oct 06 12:06:18 crc kubenswrapper[4698]: I1006 12:06:18.674790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerDied","Data":"4ddbfd4a42f2806b2d3b34a8e0599067c233e796a9421b143d6031a87d37a4a2"} Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.702190 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerStarted","Data":"857b9ed6758cc01ba25edf1d2dfe5f527e7a380c3c07747a7968d4757e626f22"} Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.702953 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.732367 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" podStartSLOduration=7.732342161 
podStartE2EDuration="7.732342161s" podCreationTimestamp="2025-10-06 12:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:19.722581567 +0000 UTC m=+1267.135273770" watchObservedRunningTime="2025-10-06 12:06:19.732342161 +0000 UTC m=+1267.145034334" Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.883711 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.885242 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="sg-core" containerID="cri-o://cbd24fed508d5af95fea49d6be399fa78925fec0e082cc255bf9aec47c81d676" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.885420 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="proxy-httpd" containerID="cri-o://5894bc4b453f9605f96cca797ccef75cd6f67138d19f19bf09d6de31580345dc" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.885646 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-central-agent" containerID="cri-o://54c266c51c2d611f5cb146073dd3ccc54d4ae5e4344a161ad5ae37341e546631" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.885636 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-notification-agent" containerID="cri-o://7905a3da2bfe010b5afcf33e6e1e411511b8a4affcf6bb35221d2fd80e650605" gracePeriod=30 Oct 06 12:06:19 crc kubenswrapper[4698]: I1006 12:06:19.904325 4698 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.217:3000/\": read tcp 10.217.0.2:44176->10.217.0.217:3000: read: connection reset by peer" Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.721579 4698 generic.go:334] "Generic (PLEG): container finished" podID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerID="5894bc4b453f9605f96cca797ccef75cd6f67138d19f19bf09d6de31580345dc" exitCode=0 Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.722190 4698 generic.go:334] "Generic (PLEG): container finished" podID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerID="cbd24fed508d5af95fea49d6be399fa78925fec0e082cc255bf9aec47c81d676" exitCode=2 Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.722213 4698 generic.go:334] "Generic (PLEG): container finished" podID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerID="54c266c51c2d611f5cb146073dd3ccc54d4ae5e4344a161ad5ae37341e546631" exitCode=0 Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.721685 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerDied","Data":"5894bc4b453f9605f96cca797ccef75cd6f67138d19f19bf09d6de31580345dc"} Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.723006 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerDied","Data":"cbd24fed508d5af95fea49d6be399fa78925fec0e082cc255bf9aec47c81d676"} Oct 06 12:06:20 crc kubenswrapper[4698]: I1006 12:06:20.723069 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerDied","Data":"54c266c51c2d611f5cb146073dd3ccc54d4ae5e4344a161ad5ae37341e546631"} Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.273448 
4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.352203 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.746580 4698 generic.go:334] "Generic (PLEG): container finished" podID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerID="7905a3da2bfe010b5afcf33e6e1e411511b8a4affcf6bb35221d2fd80e650605" exitCode=0 Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.746699 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerDied","Data":"7905a3da2bfe010b5afcf33e6e1e411511b8a4affcf6bb35221d2fd80e650605"} Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.766636 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.954403 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qcgff"] Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.960410 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.962498 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.964089 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.966687 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.969159 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qcgff"] Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.992027 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": read tcp 10.217.0.2:59736->10.217.0.215:8774: read: connection reset by peer" Oct 06 12:06:21 crc kubenswrapper[4698]: I1006 12:06:21.992173 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": read tcp 10.217.0.2:59738->10.217.0.215:8774: read: connection reset by peer" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.055945 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: 
\"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056169 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056324 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056403 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056638 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.056790 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6fsp\" (UniqueName: 
\"kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp\") pod \"467ad7da-f676-475e-b946-5bdfc14e0df9\" (UID: \"467ad7da-f676-475e-b946-5bdfc14e0df9\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.057631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.057832 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058081 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvvmh\" (UniqueName: \"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058193 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058393 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.058413 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467ad7da-f676-475e-b946-5bdfc14e0df9-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.068534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp" (OuterVolumeSpecName: "kube-api-access-p6fsp") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "kube-api-access-p6fsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.068539 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts" (OuterVolumeSpecName: "scripts") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.122181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.146817 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162358 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162480 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvvmh\" (UniqueName: \"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162807 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162831 4698 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-p6fsp\" (UniqueName: \"kubernetes.io/projected/467ad7da-f676-475e-b946-5bdfc14e0df9-kube-api-access-p6fsp\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162846 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.162858 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.167902 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.172254 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.172745 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.189706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvvmh\" 
(UniqueName: \"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh\") pod \"nova-cell1-cell-mapping-qcgff\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.196287 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.235249 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data" (OuterVolumeSpecName: "config-data") pod "467ad7da-f676-475e-b946-5bdfc14e0df9" (UID: "467ad7da-f676-475e-b946-5bdfc14e0df9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.265316 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.265365 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467ad7da-f676-475e-b946-5bdfc14e0df9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.277799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.696573 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.769994 4698 generic.go:334] "Generic (PLEG): container finished" podID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerID="4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745" exitCode=0 Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.770086 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerDied","Data":"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745"} Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.770119 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2","Type":"ContainerDied","Data":"afc76fffd6b430c32ecd5799d8acda1ad54f008fae4d5ce72999e44ef469d8ba"} Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.770137 4698 scope.go:117] "RemoveContainer" containerID="4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.770282 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.776746 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.776955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467ad7da-f676-475e-b946-5bdfc14e0df9","Type":"ContainerDied","Data":"bf3f74e7e050b9b0072a647749ffbe0f8c4daf256f29c83d7b420dfcd1eb0e5f"} Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.778028 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data\") pod \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.778070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45cnt\" (UniqueName: \"kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt\") pod \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.778170 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs\") pod \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.778283 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle\") pod \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\" (UID: \"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2\") " Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.782648 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs" (OuterVolumeSpecName: "logs") pod 
"7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" (UID: "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.789141 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt" (OuterVolumeSpecName: "kube-api-access-45cnt") pod "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" (UID: "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2"). InnerVolumeSpecName "kube-api-access-45cnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.825273 4698 scope.go:117] "RemoveContainer" containerID="59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.832720 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" (UID: "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.847838 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.847941 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data" (OuterVolumeSpecName: "config-data") pod "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" (UID: "7d19ad1b-280b-4ef2-a6d7-b626a19a94f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.858745 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872177 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.872708 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-central-agent" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872728 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-central-agent" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.872741 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="proxy-httpd" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872748 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="proxy-httpd" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.872761 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872766 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.872794 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-notification-agent" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872800 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-notification-agent" Oct 06 12:06:22 crc 
kubenswrapper[4698]: E1006 12:06:22.872816 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872821 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.872849 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="sg-core" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.872855 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="sg-core" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873085 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="sg-core" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873103 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-log" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873113 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" containerName="nova-api-api" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873136 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-notification-agent" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873146 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="proxy-httpd" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.873153 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" containerName="ceilometer-central-agent" Oct 06 12:06:22 crc 
kubenswrapper[4698]: I1006 12:06:22.878372 4698 scope.go:117] "RemoveContainer" containerID="4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.881481 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.882895 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.882996 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45cnt\" (UniqueName: \"kubernetes.io/projected/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-kube-api-access-45cnt\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.883069 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.882750 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745\": container with ID starting with 4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745 not found: ID does not exist" containerID="4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.883207 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745"} err="failed to get container status 
\"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745\": rpc error: code = NotFound desc = could not find container \"4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745\": container with ID starting with 4055946154a09d402e20d5d89081eeca2789d9d8952eb32218704f24e9c2c745 not found: ID does not exist" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.883307 4698 scope.go:117] "RemoveContainer" containerID="59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.883384 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:22 crc kubenswrapper[4698]: E1006 12:06:22.885298 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e\": container with ID starting with 59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e not found: ID does not exist" containerID="59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.885364 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e"} err="failed to get container status \"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e\": rpc error: code = NotFound desc = could not find container \"59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e\": container with ID starting with 59156258b55a21f928ebccafe4592194c840bfbae22bbeb7a93466f851f27e8e not found: ID does not exist" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.885391 4698 scope.go:117] "RemoveContainer" containerID="5894bc4b453f9605f96cca797ccef75cd6f67138d19f19bf09d6de31580345dc" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.885779 4698 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.885981 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.886382 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.891413 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:22 crc kubenswrapper[4698]: W1006 12:06:22.951357 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec80afd9_0c75_4270_9b9a_c9f0380a3a86.slice/crio-fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08 WatchSource:0}: Error finding container fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08: Status 404 returned error can't find the container with id fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08 Oct 06 12:06:22 crc kubenswrapper[4698]: I1006 12:06:22.997256 4698 scope.go:117] "RemoveContainer" containerID="cbd24fed508d5af95fea49d6be399fa78925fec0e082cc255bf9aec47c81d676" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:22.999993 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-log-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.000121 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qlq\" (UniqueName: \"kubernetes.io/projected/71e624a3-d6ee-458b-be82-fcc805fbc29b-kube-api-access-66qlq\") pod \"ceilometer-0\" (UID: 
\"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.000219 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.002784 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qcgff"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.009376 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-run-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.009692 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-scripts\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.010651 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-config-data\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.010680 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.010827 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.037000 4698 scope.go:117] "RemoveContainer" containerID="7905a3da2bfe010b5afcf33e6e1e411511b8a4affcf6bb35221d2fd80e650605" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.079157 4698 scope.go:117] "RemoveContainer" containerID="54c266c51c2d611f5cb146073dd3ccc54d4ae5e4344a161ad5ae37341e546631" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.123100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-log-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.123421 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66qlq\" (UniqueName: \"kubernetes.io/projected/71e624a3-d6ee-458b-be82-fcc805fbc29b-kube-api-access-66qlq\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.123479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-log-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.123860 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.124161 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-run-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.124379 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71e624a3-d6ee-458b-be82-fcc805fbc29b-run-httpd\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.124899 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-scripts\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.124972 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-config-data\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.124993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " 
pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.125040 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.128650 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.128893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-scripts\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.130842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-config-data\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.132708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.133216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/71e624a3-d6ee-458b-be82-fcc805fbc29b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.161318 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66qlq\" (UniqueName: \"kubernetes.io/projected/71e624a3-d6ee-458b-be82-fcc805fbc29b-kube-api-access-66qlq\") pod \"ceilometer-0\" (UID: \"71e624a3-d6ee-458b-be82-fcc805fbc29b\") " pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.173336 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.183880 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.204132 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.205984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.215593 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.216539 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.216810 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.218562 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.224304 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.338647 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.339177 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.339210 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.339228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxxw\" 
(UniqueName: \"kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.339294 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.339340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.361878 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467ad7da-f676-475e-b946-5bdfc14e0df9" path="/var/lib/kubelet/pods/467ad7da-f676-475e-b946-5bdfc14e0df9/volumes" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.362670 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d19ad1b-280b-4ef2-a6d7-b626a19a94f2" path="/var/lib/kubelet/pods/7d19ad1b-280b-4ef2-a6d7-b626a19a94f2/volumes" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442242 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxxw\" (UniqueName: \"kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442286 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.442332 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.443224 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.453468 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.453509 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.455434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.457784 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.460245 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxxw\" (UniqueName: \"kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw\") pod \"nova-api-0\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.696068 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.794539 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.797382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qcgff" event={"ID":"ec80afd9-0c75-4270-9b9a-c9f0380a3a86","Type":"ContainerStarted","Data":"27f205a545deab8cf7b769000f51c44b23c905ae882eac8c726f6c4bc6de7c42"} Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.797443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qcgff" event={"ID":"ec80afd9-0c75-4270-9b9a-c9f0380a3a86","Type":"ContainerStarted","Data":"fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08"} Oct 06 12:06:23 crc kubenswrapper[4698]: I1006 12:06:23.822525 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qcgff" podStartSLOduration=2.822502487 podStartE2EDuration="2.822502487s" podCreationTimestamp="2025-10-06 12:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:23.822160559 +0000 UTC m=+1271.234852732" watchObservedRunningTime="2025-10-06 12:06:23.822502487 +0000 UTC m=+1271.235194660" Oct 06 12:06:24 crc kubenswrapper[4698]: I1006 12:06:24.343053 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:24 crc kubenswrapper[4698]: I1006 12:06:24.820500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerStarted","Data":"4fd9c9bc2b5bd4a512899c1fc9b663e0e9994e2ba4714ff82cdcde849387a3de"} Oct 06 12:06:24 crc kubenswrapper[4698]: I1006 12:06:24.820984 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerStarted","Data":"5d20249d89d0d3a668c490b25fd976656e17325facfeb0bb337a07b3e9d60564"} Oct 06 12:06:24 crc kubenswrapper[4698]: I1006 12:06:24.824912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71e624a3-d6ee-458b-be82-fcc805fbc29b","Type":"ContainerStarted","Data":"7493cd99ca28f77b27c4932669586f82fa9d77f402c13950a49106029b4791dc"} Oct 06 12:06:25 crc kubenswrapper[4698]: I1006 12:06:25.235037 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:06:25 crc kubenswrapper[4698]: I1006 12:06:25.235490 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:06:25 crc kubenswrapper[4698]: I1006 12:06:25.837444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerStarted","Data":"6576b43a4b5269b3be0e29452340a2f2971604596dec66aa7055d831183b3ffd"} Oct 06 12:06:25 crc kubenswrapper[4698]: I1006 12:06:25.839622 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71e624a3-d6ee-458b-be82-fcc805fbc29b","Type":"ContainerStarted","Data":"68589edea0b8df185d1847c2da165951d3dc79df1baf3f4a7530ab8be7d47ef7"} Oct 06 12:06:25 crc kubenswrapper[4698]: I1006 12:06:25.874390 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.874366507 
podStartE2EDuration="2.874366507s" podCreationTimestamp="2025-10-06 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:25.862034912 +0000 UTC m=+1273.274727095" watchObservedRunningTime="2025-10-06 12:06:25.874366507 +0000 UTC m=+1273.287058690" Oct 06 12:06:26 crc kubenswrapper[4698]: I1006 12:06:26.853769 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71e624a3-d6ee-458b-be82-fcc805fbc29b","Type":"ContainerStarted","Data":"884930b9f37aab7df372d0d32e1e96b65f077df11572be0246daf3e638db20db"} Oct 06 12:06:27 crc kubenswrapper[4698]: I1006 12:06:27.868163 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71e624a3-d6ee-458b-be82-fcc805fbc29b","Type":"ContainerStarted","Data":"4eeff526d7d47dfc807b5ec46c0cf4f7867c698771d55c207004c8ad48ae73a7"} Oct 06 12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.190237 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.266111 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.266430 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="dnsmasq-dns" containerID="cri-o://c59ec42cc78888aabc7cb973c252b84ff9936d5ff1c5bce055239ca54295c417" gracePeriod=10 Oct 06 12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.621297 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.209:5353: connect: connection refused" Oct 06 
12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.914992 4698 generic.go:334] "Generic (PLEG): container finished" podID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerID="c59ec42cc78888aabc7cb973c252b84ff9936d5ff1c5bce055239ca54295c417" exitCode=0 Oct 06 12:06:28 crc kubenswrapper[4698]: I1006 12:06:28.915124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" event={"ID":"3df91b94-ef4b-4b23-9401-159a50392bb8","Type":"ContainerDied","Data":"c59ec42cc78888aabc7cb973c252b84ff9936d5ff1c5bce055239ca54295c417"} Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.139501 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316291 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316391 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316481 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316583 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316632 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.316748 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbpfd\" (UniqueName: \"kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd\") pod \"3df91b94-ef4b-4b23-9401-159a50392bb8\" (UID: \"3df91b94-ef4b-4b23-9401-159a50392bb8\") " Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.326538 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd" (OuterVolumeSpecName: "kube-api-access-hbpfd") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "kube-api-access-hbpfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.407917 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config" (OuterVolumeSpecName: "config") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.415845 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.424836 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.424915 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.424870 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbpfd\" (UniqueName: \"kubernetes.io/projected/3df91b94-ef4b-4b23-9401-159a50392bb8-kube-api-access-hbpfd\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.424981 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.436369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.454213 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3df91b94-ef4b-4b23-9401-159a50392bb8" (UID: "3df91b94-ef4b-4b23-9401-159a50392bb8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.527702 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.527769 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.527783 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3df91b94-ef4b-4b23-9401-159a50392bb8-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.931044 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71e624a3-d6ee-458b-be82-fcc805fbc29b","Type":"ContainerStarted","Data":"c8679f330ae5b8e630ee5d38d19f9751cddafa35be14571b6a3428bb726db17c"} Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.933866 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" event={"ID":"3df91b94-ef4b-4b23-9401-159a50392bb8","Type":"ContainerDied","Data":"4bbdd591dcacb8f0bc6102c7887b04678755d5acf452ef26e8c140f39ce2adb1"} Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.934004 4698 scope.go:117] "RemoveContainer" containerID="c59ec42cc78888aabc7cb973c252b84ff9936d5ff1c5bce055239ca54295c417" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.934249 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-ssjhh" Oct 06 12:06:29 crc kubenswrapper[4698]: I1006 12:06:29.978822 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.451598155 podStartE2EDuration="7.978797557s" podCreationTimestamp="2025-10-06 12:06:22 +0000 UTC" firstStartedPulling="2025-10-06 12:06:23.823992653 +0000 UTC m=+1271.236684826" lastFinishedPulling="2025-10-06 12:06:28.351192055 +0000 UTC m=+1275.763884228" observedRunningTime="2025-10-06 12:06:29.961825019 +0000 UTC m=+1277.374517192" watchObservedRunningTime="2025-10-06 12:06:29.978797557 +0000 UTC m=+1277.391489730" Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.004234 4698 scope.go:117] "RemoveContainer" containerID="88d72f5634ea9e0f80dc6be79c827cf404a823d63f204e0fc84324b6b38ad9f0" Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.016052 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.027417 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-ssjhh"] Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.950996 4698 generic.go:334] "Generic (PLEG): container finished" podID="ec80afd9-0c75-4270-9b9a-c9f0380a3a86" containerID="27f205a545deab8cf7b769000f51c44b23c905ae882eac8c726f6c4bc6de7c42" exitCode=0 Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.951068 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qcgff" event={"ID":"ec80afd9-0c75-4270-9b9a-c9f0380a3a86","Type":"ContainerDied","Data":"27f205a545deab8cf7b769000f51c44b23c905ae882eac8c726f6c4bc6de7c42"} Oct 06 12:06:30 crc kubenswrapper[4698]: I1006 12:06:30.951808 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 12:06:31 crc kubenswrapper[4698]: I1006 12:06:31.344661 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" path="/var/lib/kubelet/pods/3df91b94-ef4b-4b23-9401-159a50392bb8/volumes" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.445897 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.608053 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle\") pod \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.608170 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvvmh\" (UniqueName: \"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh\") pod \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.608211 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts\") pod \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.608326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data\") pod \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\" (UID: \"ec80afd9-0c75-4270-9b9a-c9f0380a3a86\") " Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.614203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh" (OuterVolumeSpecName: "kube-api-access-hvvmh") pod "ec80afd9-0c75-4270-9b9a-c9f0380a3a86" (UID: "ec80afd9-0c75-4270-9b9a-c9f0380a3a86"). InnerVolumeSpecName "kube-api-access-hvvmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.616471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts" (OuterVolumeSpecName: "scripts") pod "ec80afd9-0c75-4270-9b9a-c9f0380a3a86" (UID: "ec80afd9-0c75-4270-9b9a-c9f0380a3a86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.639579 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data" (OuterVolumeSpecName: "config-data") pod "ec80afd9-0c75-4270-9b9a-c9f0380a3a86" (UID: "ec80afd9-0c75-4270-9b9a-c9f0380a3a86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.641142 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec80afd9-0c75-4270-9b9a-c9f0380a3a86" (UID: "ec80afd9-0c75-4270-9b9a-c9f0380a3a86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.710690 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.711178 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.711193 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvvmh\" (UniqueName: \"kubernetes.io/projected/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-kube-api-access-hvvmh\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.711203 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec80afd9-0c75-4270-9b9a-c9f0380a3a86-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.984124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qcgff" event={"ID":"ec80afd9-0c75-4270-9b9a-c9f0380a3a86","Type":"ContainerDied","Data":"fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08"} Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.984174 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb95f45ebfbfd208c0d02be3fcd651ee5f3c1ee017a6a8b49b821ecad0f84a08" Oct 06 12:06:32 crc kubenswrapper[4698]: I1006 12:06:32.984696 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qcgff" Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.186215 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.186549 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-log" containerID="cri-o://4fd9c9bc2b5bd4a512899c1fc9b663e0e9994e2ba4714ff82cdcde849387a3de" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.186636 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-api" containerID="cri-o://6576b43a4b5269b3be0e29452340a2f2971604596dec66aa7055d831183b3ffd" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.227575 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.227997 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" containerID="cri-o://7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.294504 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.294874 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-log" containerID="cri-o://949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.296493 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-metadata" containerID="cri-o://013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef" gracePeriod=30 Oct 06 12:06:33 crc kubenswrapper[4698]: E1006 12:06:33.742581 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:33 crc kubenswrapper[4698]: E1006 12:06:33.744303 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:33 crc kubenswrapper[4698]: E1006 12:06:33.745990 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:33 crc kubenswrapper[4698]: E1006 12:06:33.746056 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.999322 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="70f90972-b5e6-4488-929f-86d580cf0c69" containerID="6576b43a4b5269b3be0e29452340a2f2971604596dec66aa7055d831183b3ffd" exitCode=0 Oct 06 12:06:33 crc kubenswrapper[4698]: I1006 12:06:33.999703 4698 generic.go:334] "Generic (PLEG): container finished" podID="70f90972-b5e6-4488-929f-86d580cf0c69" containerID="4fd9c9bc2b5bd4a512899c1fc9b663e0e9994e2ba4714ff82cdcde849387a3de" exitCode=143 Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:33.999779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerDied","Data":"6576b43a4b5269b3be0e29452340a2f2971604596dec66aa7055d831183b3ffd"} Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:33.999814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerDied","Data":"4fd9c9bc2b5bd4a512899c1fc9b663e0e9994e2ba4714ff82cdcde849387a3de"} Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:33.999827 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70f90972-b5e6-4488-929f-86d580cf0c69","Type":"ContainerDied","Data":"5d20249d89d0d3a668c490b25fd976656e17325facfeb0bb337a07b3e9d60564"} Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:33.999840 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d20249d89d0d3a668c490b25fd976656e17325facfeb0bb337a07b3e9d60564" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.002472 4698 generic.go:334] "Generic (PLEG): container finished" podID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerID="949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8" exitCode=143 Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.002527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerDied","Data":"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8"} Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.062446 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.176962 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle\") pod \"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.177090 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjxxw\" (UniqueName: \"kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw\") pod \"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.177124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data\") pod \"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.177168 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs\") pod \"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.177242 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs\") pod 
\"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.177268 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs\") pod \"70f90972-b5e6-4488-929f-86d580cf0c69\" (UID: \"70f90972-b5e6-4488-929f-86d580cf0c69\") " Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.178046 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs" (OuterVolumeSpecName: "logs") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.200407 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw" (OuterVolumeSpecName: "kube-api-access-fjxxw") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "kube-api-access-fjxxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.213896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.219984 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data" (OuterVolumeSpecName: "config-data") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.264222 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.280331 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.280389 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjxxw\" (UniqueName: \"kubernetes.io/projected/70f90972-b5e6-4488-929f-86d580cf0c69-kube-api-access-fjxxw\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.280406 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.280417 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f90972-b5e6-4488-929f-86d580cf0c69-logs\") on node \"crc\" DevicePath \"\"" Oct 06 
12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.280430 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.293109 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70f90972-b5e6-4488-929f-86d580cf0c69" (UID: "70f90972-b5e6-4488-929f-86d580cf0c69"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:34 crc kubenswrapper[4698]: I1006 12:06:34.382924 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f90972-b5e6-4488-929f-86d580cf0c69-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.015050 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.068223 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.087964 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107031 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4698]: E1006 12:06:35.107596 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107618 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4698]: E1006 12:06:35.107634 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec80afd9-0c75-4270-9b9a-c9f0380a3a86" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107642 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec80afd9-0c75-4270-9b9a-c9f0380a3a86" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4698]: E1006 12:06:35.107656 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="init" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107664 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="init" Oct 06 12:06:35 crc kubenswrapper[4698]: E1006 12:06:35.107674 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107680 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4698]: E1006 12:06:35.107695 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107702 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107947 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-log" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107972 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df91b94-ef4b-4b23-9401-159a50392bb8" containerName="dnsmasq-dns" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107982 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec80afd9-0c75-4270-9b9a-c9f0380a3a86" containerName="nova-manage" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.107999 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" containerName="nova-api-api" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.109501 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.113532 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.114461 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.114884 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.130296 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9pgn\" (UniqueName: \"kubernetes.io/projected/68fa4814-8052-4643-996f-ec7f189756e2-kube-api-access-x9pgn\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-config-data\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201409 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201493 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fa4814-8052-4643-996f-ec7f189756e2-logs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.201566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303473 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303585 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9pgn\" (UniqueName: \"kubernetes.io/projected/68fa4814-8052-4643-996f-ec7f189756e2-kube-api-access-x9pgn\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303624 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " 
pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303672 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-config-data\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303764 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.303882 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fa4814-8052-4643-996f-ec7f189756e2-logs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.304639 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fa4814-8052-4643-996f-ec7f189756e2-logs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.309668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.310052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-config-data\") pod \"nova-api-0\" (UID: 
\"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.310730 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.312123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fa4814-8052-4643-996f-ec7f189756e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.334540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9pgn\" (UniqueName: \"kubernetes.io/projected/68fa4814-8052-4643-996f-ec7f189756e2-kube-api-access-x9pgn\") pod \"nova-api-0\" (UID: \"68fa4814-8052-4643-996f-ec7f189756e2\") " pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.343949 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f90972-b5e6-4488-929f-86d580cf0c69" path="/var/lib/kubelet/pods/70f90972-b5e6-4488-929f-86d580cf0c69/volumes" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.435889 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 12:06:35 crc kubenswrapper[4698]: I1006 12:06:35.998512 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 12:06:36 crc kubenswrapper[4698]: I1006 12:06:36.028437 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68fa4814-8052-4643-996f-ec7f189756e2","Type":"ContainerStarted","Data":"63f513d2156d8f4a37c286862d322a2223a7adb88db3b8a81fa136afd7e76f48"} Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.053555 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.063569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68fa4814-8052-4643-996f-ec7f189756e2","Type":"ContainerStarted","Data":"10951edb6682795bfec06864c10a47f2f78b7c42a8d72703b9c56cb6fdc6e1f2"} Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.063643 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68fa4814-8052-4643-996f-ec7f189756e2","Type":"ContainerStarted","Data":"77cc19ad3f2ad1e692bf654b66bd0abfc91f79cf450390e773aa486e44ea2da7"} Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.067773 4698 generic.go:334] "Generic (PLEG): container finished" podID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerID="013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef" exitCode=0 Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.067830 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.067844 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerDied","Data":"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef"} Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.067902 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a","Type":"ContainerDied","Data":"c1358257c1900a79e7da47c6970ba484ddde8550c19edc149e8fac6c3eb16ae2"} Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.067926 4698 scope.go:117] "RemoveContainer" containerID="013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.107233 4698 scope.go:117] "RemoveContainer" containerID="949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.132918 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.132893131 podStartE2EDuration="2.132893131s" podCreationTimestamp="2025-10-06 12:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:37.11493456 +0000 UTC m=+1284.527626753" watchObservedRunningTime="2025-10-06 12:06:37.132893131 +0000 UTC m=+1284.545585304" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.153321 4698 scope.go:117] "RemoveContainer" containerID="013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef" Oct 06 12:06:37 crc kubenswrapper[4698]: E1006 12:06:37.154135 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef\": container with ID starting with 013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef not found: ID does not exist" containerID="013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.154205 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef"} err="failed to get container status \"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef\": rpc error: code = NotFound desc = could not find container \"013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef\": container with ID starting with 013dccbc572996c4439a6a2c3f02b8463106ea3620ba2737d630bc34633f8aef not found: ID does not exist" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.154244 4698 scope.go:117] "RemoveContainer" containerID="949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8" Oct 06 12:06:37 crc kubenswrapper[4698]: E1006 12:06:37.154909 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8\": container with ID starting with 949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8 not found: ID does not exist" containerID="949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.154948 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8"} err="failed to get container status \"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8\": rpc error: code = NotFound desc = could not find container \"949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8\": container with ID 
starting with 949054b320be64df0b85c8ad9f32a161f61c3822968a84b072b99f2ee213feb8 not found: ID does not exist" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.251707 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs\") pod \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.251856 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs\") pod \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.251954 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle\") pod \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.252004 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5l9\" (UniqueName: \"kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9\") pod \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.252066 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data\") pod \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\" (UID: \"3ecc5117-7abf-481a-8d5e-b8d4efef7b5a\") " Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.253275 4698 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs" (OuterVolumeSpecName: "logs") pod "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" (UID: "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.253438 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-logs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.258727 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9" (OuterVolumeSpecName: "kube-api-access-6z5l9") pod "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" (UID: "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a"). InnerVolumeSpecName "kube-api-access-6z5l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.288837 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" (UID: "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.290992 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data" (OuterVolumeSpecName: "config-data") pod "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" (UID: "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.317968 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" (UID: "3ecc5117-7abf-481a-8d5e-b8d4efef7b5a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.358101 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.358157 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.358173 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5l9\" (UniqueName: \"kubernetes.io/projected/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-kube-api-access-6z5l9\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.358189 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.401664 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.411411 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.431156 4698 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:37 crc kubenswrapper[4698]: E1006 12:06:37.431713 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-metadata" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.431758 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-metadata" Oct 06 12:06:37 crc kubenswrapper[4698]: E1006 12:06:37.431792 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-log" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.431801 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-log" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.432098 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-metadata" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.432123 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" containerName="nova-metadata-log" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.433494 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.436782 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.438739 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.447998 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.562751 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.562819 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-config-data\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.563067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.563125 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3a05a9-7f25-4408-91b3-0ffa68c55545-logs\") pod \"nova-metadata-0\" (UID: 
\"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.563230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vzw\" (UniqueName: \"kubernetes.io/projected/7c3a05a9-7f25-4408-91b3-0ffa68c55545-kube-api-access-72vzw\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.665753 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72vzw\" (UniqueName: \"kubernetes.io/projected/7c3a05a9-7f25-4408-91b3-0ffa68c55545-kube-api-access-72vzw\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.665901 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.665936 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-config-data\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.666051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 
12:06:37.666078 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3a05a9-7f25-4408-91b3-0ffa68c55545-logs\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.666562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3a05a9-7f25-4408-91b3-0ffa68c55545-logs\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.671576 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.673610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-config-data\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.674484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3a05a9-7f25-4408-91b3-0ffa68c55545-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.684892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72vzw\" (UniqueName: \"kubernetes.io/projected/7c3a05a9-7f25-4408-91b3-0ffa68c55545-kube-api-access-72vzw\") pod \"nova-metadata-0\" (UID: 
\"7c3a05a9-7f25-4408-91b3-0ffa68c55545\") " pod="openstack/nova-metadata-0" Oct 06 12:06:37 crc kubenswrapper[4698]: I1006 12:06:37.753722 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 12:06:38 crc kubenswrapper[4698]: I1006 12:06:38.333254 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 12:06:38 crc kubenswrapper[4698]: W1006 12:06:38.369973 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c3a05a9_7f25_4408_91b3_0ffa68c55545.slice/crio-f46800c1b988b714cec88b0de42beed4f70cf5c09d9b641768691ba29670e583 WatchSource:0}: Error finding container f46800c1b988b714cec88b0de42beed4f70cf5c09d9b641768691ba29670e583: Status 404 returned error can't find the container with id f46800c1b988b714cec88b0de42beed4f70cf5c09d9b641768691ba29670e583 Oct 06 12:06:38 crc kubenswrapper[4698]: E1006 12:06:38.742604 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 is running failed: container process not found" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:38 crc kubenswrapper[4698]: E1006 12:06:38.743425 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 is running failed: container process not found" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:38 crc kubenswrapper[4698]: E1006 12:06:38.743630 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 is running failed: container process not found" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 12:06:38 crc kubenswrapper[4698]: E1006 12:06:38.743659 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" Oct 06 12:06:38 crc kubenswrapper[4698]: I1006 12:06:38.861700 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:38 crc kubenswrapper[4698]: I1006 12:06:38.996855 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle\") pod \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " Oct 06 12:06:38 crc kubenswrapper[4698]: I1006 12:06:38.996965 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dl6\" (UniqueName: \"kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6\") pod \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\" (UID: \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " Oct 06 12:06:38 crc kubenswrapper[4698]: I1006 12:06:38.997033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data\") pod \"6b4d3b77-9014-4eec-96e1-c31df74e6a14\" (UID: 
\"6b4d3b77-9014-4eec-96e1-c31df74e6a14\") " Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.006148 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6" (OuterVolumeSpecName: "kube-api-access-l9dl6") pod "6b4d3b77-9014-4eec-96e1-c31df74e6a14" (UID: "6b4d3b77-9014-4eec-96e1-c31df74e6a14"). InnerVolumeSpecName "kube-api-access-l9dl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.046623 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b4d3b77-9014-4eec-96e1-c31df74e6a14" (UID: "6b4d3b77-9014-4eec-96e1-c31df74e6a14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.060165 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data" (OuterVolumeSpecName: "config-data") pod "6b4d3b77-9014-4eec-96e1-c31df74e6a14" (UID: "6b4d3b77-9014-4eec-96e1-c31df74e6a14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.098223 4698 generic.go:334] "Generic (PLEG): container finished" podID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" exitCode=0 Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.098309 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.098353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b4d3b77-9014-4eec-96e1-c31df74e6a14","Type":"ContainerDied","Data":"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413"} Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.098399 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b4d3b77-9014-4eec-96e1-c31df74e6a14","Type":"ContainerDied","Data":"085c0d7a27aa8809583f7e45e90a1040f0a222dc5f171733eb4b1f6d4c873e98"} Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.098421 4698 scope.go:117] "RemoveContainer" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.099812 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.099855 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dl6\" (UniqueName: \"kubernetes.io/projected/6b4d3b77-9014-4eec-96e1-c31df74e6a14-kube-api-access-l9dl6\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.099873 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b4d3b77-9014-4eec-96e1-c31df74e6a14-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.108407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c3a05a9-7f25-4408-91b3-0ffa68c55545","Type":"ContainerStarted","Data":"fb72a64363456af42e516f7d6d3e3aa1ae399871444c951e126aa0d347d5cf67"} Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 
12:06:39.108466 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c3a05a9-7f25-4408-91b3-0ffa68c55545","Type":"ContainerStarted","Data":"a2e009b99c8c3f2d751e65b1fbe79385c5de36dfb8621506ca73a7c2bcbd74f0"} Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.108480 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c3a05a9-7f25-4408-91b3-0ffa68c55545","Type":"ContainerStarted","Data":"f46800c1b988b714cec88b0de42beed4f70cf5c09d9b641768691ba29670e583"} Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.145036 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.144990006 podStartE2EDuration="2.144990006s" podCreationTimestamp="2025-10-06 12:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:39.13011815 +0000 UTC m=+1286.542810323" watchObservedRunningTime="2025-10-06 12:06:39.144990006 +0000 UTC m=+1286.557682179" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.149880 4698 scope.go:117] "RemoveContainer" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" Oct 06 12:06:39 crc kubenswrapper[4698]: E1006 12:06:39.151676 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413\": container with ID starting with 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 not found: ID does not exist" containerID="7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.151739 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413"} err="failed 
to get container status \"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413\": rpc error: code = NotFound desc = could not find container \"7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413\": container with ID starting with 7a82d040e592628b55419bea080f169979f5b447fb833ab0f18a342865d85413 not found: ID does not exist" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.172694 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.187511 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.198708 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:39 crc kubenswrapper[4698]: E1006 12:06:39.199456 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.199479 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.199719 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" containerName="nova-scheduler-scheduler" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.200693 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.204563 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.221844 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.307511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.307673 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-config-data\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.308032 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvbw\" (UniqueName: \"kubernetes.io/projected/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-kube-api-access-9rvbw\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.353419 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecc5117-7abf-481a-8d5e-b8d4efef7b5a" path="/var/lib/kubelet/pods/3ecc5117-7abf-481a-8d5e-b8d4efef7b5a/volumes" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.354271 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4d3b77-9014-4eec-96e1-c31df74e6a14" 
path="/var/lib/kubelet/pods/6b4d3b77-9014-4eec-96e1-c31df74e6a14/volumes" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.410253 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvbw\" (UniqueName: \"kubernetes.io/projected/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-kube-api-access-9rvbw\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.410491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.410607 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-config-data\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.418129 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.418733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-config-data\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.435586 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9rvbw\" (UniqueName: \"kubernetes.io/projected/d1cd5e9b-2297-4e73-91d5-a1cd00ff8263-kube-api-access-9rvbw\") pod \"nova-scheduler-0\" (UID: \"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263\") " pod="openstack/nova-scheduler-0" Oct 06 12:06:39 crc kubenswrapper[4698]: I1006 12:06:39.523516 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 12:06:40 crc kubenswrapper[4698]: I1006 12:06:40.089753 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 12:06:40 crc kubenswrapper[4698]: I1006 12:06:40.123485 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263","Type":"ContainerStarted","Data":"3b01fac07800cedf3f132bbcd0da8cce572879d214778643a4e4d1f09bbdcd24"} Oct 06 12:06:41 crc kubenswrapper[4698]: I1006 12:06:41.139657 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1cd5e9b-2297-4e73-91d5-a1cd00ff8263","Type":"ContainerStarted","Data":"53b17f2651091be6585c291bd2a1774bae620b7a1956e7b89cf4ae8de1deb533"} Oct 06 12:06:41 crc kubenswrapper[4698]: I1006 12:06:41.166440 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.166418187 podStartE2EDuration="2.166418187s" podCreationTimestamp="2025-10-06 12:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:06:41.162883532 +0000 UTC m=+1288.575575725" watchObservedRunningTime="2025-10-06 12:06:41.166418187 +0000 UTC m=+1288.579110360" Oct 06 12:06:42 crc kubenswrapper[4698]: I1006 12:06:42.754049 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:06:42 crc kubenswrapper[4698]: I1006 12:06:42.754427 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 12:06:44 crc kubenswrapper[4698]: I1006 12:06:44.525198 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 12:06:45 crc kubenswrapper[4698]: I1006 12:06:45.437109 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:45 crc kubenswrapper[4698]: I1006 12:06:45.437158 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 12:06:46 crc kubenswrapper[4698]: I1006 12:06:46.452355 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="68fa4814-8052-4643-996f-ec7f189756e2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:46 crc kubenswrapper[4698]: I1006 12:06:46.452368 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="68fa4814-8052-4643-996f-ec7f189756e2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:47 crc kubenswrapper[4698]: I1006 12:06:47.754655 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:06:47 crc kubenswrapper[4698]: I1006 12:06:47.755268 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 12:06:48 crc kubenswrapper[4698]: I1006 12:06:48.773556 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7c3a05a9-7f25-4408-91b3-0ffa68c55545" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Oct 06 12:06:48 crc kubenswrapper[4698]: I1006 12:06:48.773573 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7c3a05a9-7f25-4408-91b3-0ffa68c55545" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 12:06:49 crc kubenswrapper[4698]: I1006 12:06:49.524761 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 12:06:49 crc kubenswrapper[4698]: I1006 12:06:49.579675 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 12:06:50 crc kubenswrapper[4698]: I1006 12:06:50.294432 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 12:06:53 crc kubenswrapper[4698]: I1006 12:06:53.232219 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.235152 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.235743 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.479157 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.480200 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.482737 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 12:06:55 crc kubenswrapper[4698]: I1006 12:06:55.492976 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:56 crc kubenswrapper[4698]: I1006 12:06:56.346881 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 12:06:56 crc kubenswrapper[4698]: I1006 12:06:56.358127 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 12:06:57 crc kubenswrapper[4698]: I1006 12:06:57.763139 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:57 crc kubenswrapper[4698]: I1006 12:06:57.773745 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 12:06:57 crc kubenswrapper[4698]: I1006 12:06:57.779795 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:06:58 crc kubenswrapper[4698]: I1006 12:06:58.384918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 12:07:06 crc kubenswrapper[4698]: I1006 12:07:06.394829 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:07 crc kubenswrapper[4698]: I1006 12:07:07.435051 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:11 crc kubenswrapper[4698]: I1006 12:07:11.609730 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" 
podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="rabbitmq" containerID="cri-o://c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16" gracePeriod=604795 Oct 06 12:07:12 crc kubenswrapper[4698]: I1006 12:07:12.371533 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="rabbitmq" containerID="cri-o://32b383e4c41c409e81720d1e2b0d2ceeb0a1bc921ec0ba7ec3db659eded7f7ea" gracePeriod=604796 Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.219587 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352726 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd\") 
pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6jph\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352962 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.352994 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.354279 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.354477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.355397 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.355571 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.355718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf\") pod \"4815e17b-a929-4914-91e6-6e9b3ef94561\" (UID: \"4815e17b-a929-4914-91e6-6e9b3ef94561\") " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.356703 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.356931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie" (OuterVolumeSpecName: 
"rabbitmq-erlang-cookie") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.358067 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.363233 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.363597 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info" (OuterVolumeSpecName: "pod-info") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.363656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.366168 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph" (OuterVolumeSpecName: "kube-api-access-h6jph") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "kube-api-access-h6jph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.383280 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.433626 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data" (OuterVolumeSpecName: "config-data") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.441901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf" (OuterVolumeSpecName: "server-conf") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.463609 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467276 4698 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4815e17b-a929-4914-91e6-6e9b3ef94561-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467342 4698 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4815e17b-a929-4914-91e6-6e9b3ef94561-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467366 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6jph\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-kube-api-access-h6jph\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467380 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467395 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467409 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467424 
4698 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.467437 4698 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4815e17b-a929-4914-91e6-6e9b3ef94561-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.513368 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.530452 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4815e17b-a929-4914-91e6-6e9b3ef94561" (UID: "4815e17b-a929-4914-91e6-6e9b3ef94561"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.569727 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.570059 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4815e17b-a929-4914-91e6-6e9b3ef94561-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.652854 4698 generic.go:334] "Generic (PLEG): container finished" podID="90c98585-3fd3-42cb-b011-01ecd1227057" containerID="32b383e4c41c409e81720d1e2b0d2ceeb0a1bc921ec0ba7ec3db659eded7f7ea" exitCode=0 Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.652980 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerDied","Data":"32b383e4c41c409e81720d1e2b0d2ceeb0a1bc921ec0ba7ec3db659eded7f7ea"} Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.660499 4698 generic.go:334] "Generic (PLEG): container finished" podID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerID="c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16" exitCode=0 Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.660567 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerDied","Data":"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16"} Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.660607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4815e17b-a929-4914-91e6-6e9b3ef94561","Type":"ContainerDied","Data":"b89da9be782cc566d3e793a532e4d5c223dce72daccc95dba515bd37d0578566"} Oct 06 
12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.660631 4698 scope.go:117] "RemoveContainer" containerID="c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.660809 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.692211 4698 scope.go:117] "RemoveContainer" containerID="0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.714394 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.748106 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.752958 4698 scope.go:117] "RemoveContainer" containerID="c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16" Oct 06 12:07:18 crc kubenswrapper[4698]: E1006 12:07:18.757186 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16\": container with ID starting with c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16 not found: ID does not exist" containerID="c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.757228 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16"} err="failed to get container status \"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16\": rpc error: code = NotFound desc = could not find container \"c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16\": container with ID starting with 
c639764697d5bb9fa39deb68bfcdd72530b7f9cd5779744d1c65727b3f806e16 not found: ID does not exist" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.757257 4698 scope.go:117] "RemoveContainer" containerID="0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505" Oct 06 12:07:18 crc kubenswrapper[4698]: E1006 12:07:18.761591 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505\": container with ID starting with 0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505 not found: ID does not exist" containerID="0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.761663 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505"} err="failed to get container status \"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505\": rpc error: code = NotFound desc = could not find container \"0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505\": container with ID starting with 0d21dd3361e91cdd3bdde8a236be1c2ac4bbcb29c83e615db84763a703feb505 not found: ID does not exist" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.765181 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:18 crc kubenswrapper[4698]: E1006 12:07:18.765763 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.765786 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4698]: E1006 12:07:18.765822 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="setup-container" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.765830 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="setup-container" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.766089 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" containerName="rabbitmq" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.767479 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.772495 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.772628 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.772784 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.772849 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-29ppb" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.773076 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.773230 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.773360 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.774838 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:18 crc 
kubenswrapper[4698]: I1006 12:07:18.879840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880387 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/770a4197-e506-41c8-921b-31db7abd83fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880485 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfgt\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-kube-api-access-scfgt\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880547 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.880597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/770a4197-e506-41c8-921b-31db7abd83fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986404 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986474 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986507 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/770a4197-e506-41c8-921b-31db7abd83fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986614 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfgt\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-kube-api-access-scfgt\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986722 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986749 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:18 crc kubenswrapper[4698]: I1006 12:07:18.986766 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/770a4197-e506-41c8-921b-31db7abd83fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:18.996001 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:18.996468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:18.997246 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.001511 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.001883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.016299 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.027300 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/770a4197-e506-41c8-921b-31db7abd83fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.030848 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.044625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/770a4197-e506-41c8-921b-31db7abd83fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.060720 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/770a4197-e506-41c8-921b-31db7abd83fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.072820 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scfgt\" (UniqueName: \"kubernetes.io/projected/770a4197-e506-41c8-921b-31db7abd83fe-kube-api-access-scfgt\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.104176 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"770a4197-e506-41c8-921b-31db7abd83fe\") " pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.119583 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.263753 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.365278 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4815e17b-a929-4914-91e6-6e9b3ef94561" path="/var/lib/kubelet/pods/4815e17b-a929-4914-91e6-6e9b3ef94561/volumes" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.402897 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.402954 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s24kg\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 
12:07:19.402993 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403073 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403199 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403351 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: 
\"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403404 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403441 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.403562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie\") pod \"90c98585-3fd3-42cb-b011-01ecd1227057\" (UID: \"90c98585-3fd3-42cb-b011-01ecd1227057\") " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.405102 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.406313 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.408283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.411369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.415790 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg" (OuterVolumeSpecName: "kube-api-access-s24kg") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "kube-api-access-s24kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.416984 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.418569 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.422175 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info" (OuterVolumeSpecName: "pod-info") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.456938 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data" (OuterVolumeSpecName: "config-data") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.467079 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf" (OuterVolumeSpecName: "server-conf") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508051 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508093 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508107 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s24kg\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-kube-api-access-s24kg\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508116 4698 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508127 4698 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90c98585-3fd3-42cb-b011-01ecd1227057-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508137 4698 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90c98585-3fd3-42cb-b011-01ecd1227057-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508161 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 
12:07:19.508173 4698 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508185 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.508194 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90c98585-3fd3-42cb-b011-01ecd1227057-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.540508 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.583864 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "90c98585-3fd3-42cb-b011-01ecd1227057" (UID: "90c98585-3fd3-42cb-b011-01ecd1227057"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.610453 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.610839 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90c98585-3fd3-42cb-b011-01ecd1227057-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.675996 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90c98585-3fd3-42cb-b011-01ecd1227057","Type":"ContainerDied","Data":"c97ccd2616b8442690e0609c71eecdf793d9e598f92a8539a2647a743870bacb"} Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.676696 4698 scope.go:117] "RemoveContainer" containerID="32b383e4c41c409e81720d1e2b0d2ceeb0a1bc921ec0ba7ec3db659eded7f7ea" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.676129 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.717713 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.717738 4698 scope.go:117] "RemoveContainer" containerID="e08d2a623d5b99981a53f1fd0656087540b5036f8e39b6556f7d21bc8e446234" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.735193 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.746680 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.773723 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:19 crc kubenswrapper[4698]: E1006 12:07:19.774664 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="setup-container" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.774688 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="setup-container" Oct 06 12:07:19 crc kubenswrapper[4698]: E1006 12:07:19.774761 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="rabbitmq" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.774792 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="rabbitmq" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.779493 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="rabbitmq" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.781215 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.788526 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.788778 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.788928 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r9gc7" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.789288 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.789471 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.789704 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.790486 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.793800 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.920772 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c4e83e2-715d-4418-a8b2-c4fe36f46192-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.920857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.920886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.920917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c4e83e2-715d-4418-a8b2-c4fe36f46192-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921287 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-r59mc\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-kube-api-access-r59mc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921478 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921562 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:19 crc kubenswrapper[4698]: I1006 12:07:19.921744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.023842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/0c4e83e2-715d-4418-a8b2-c4fe36f46192-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.023986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024040 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c4e83e2-715d-4418-a8b2-c4fe36f46192-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024230 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r59mc\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-kube-api-access-r59mc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024395 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.024537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" 
Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.025275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.025283 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.025502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.025772 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.026263 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.029792 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c4e83e2-715d-4418-a8b2-c4fe36f46192-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.030075 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.030462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c4e83e2-715d-4418-a8b2-c4fe36f46192-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.036856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.040254 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e83e2-715d-4418-a8b2-c4fe36f46192-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.049215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r59mc\" (UniqueName: 
\"kubernetes.io/projected/0c4e83e2-715d-4418-a8b2-c4fe36f46192-kube-api-access-r59mc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.063735 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0c4e83e2-715d-4418-a8b2-c4fe36f46192\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.118264 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.627818 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 12:07:20 crc kubenswrapper[4698]: W1006 12:07:20.631690 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4e83e2_715d_4418_a8b2_c4fe36f46192.slice/crio-72e71fbd0627aceecc46ef5fb14a8f3f3660826ca873b4fa9460343ddf0eb454 WatchSource:0}: Error finding container 72e71fbd0627aceecc46ef5fb14a8f3f3660826ca873b4fa9460343ddf0eb454: Status 404 returned error can't find the container with id 72e71fbd0627aceecc46ef5fb14a8f3f3660826ca873b4fa9460343ddf0eb454 Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.698473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"770a4197-e506-41c8-921b-31db7abd83fe","Type":"ContainerStarted","Data":"bfce0758dbb6eb14bca3793cb72c8f4a8fa3f681809a119e6753bfda16f04a91"} Oct 06 12:07:20 crc kubenswrapper[4698]: I1006 12:07:20.699761 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0c4e83e2-715d-4418-a8b2-c4fe36f46192","Type":"ContainerStarted","Data":"72e71fbd0627aceecc46ef5fb14a8f3f3660826ca873b4fa9460343ddf0eb454"} Oct 06 12:07:21 crc kubenswrapper[4698]: I1006 12:07:21.348145 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" path="/var/lib/kubelet/pods/90c98585-3fd3-42cb-b011-01ecd1227057/volumes" Oct 06 12:07:21 crc kubenswrapper[4698]: I1006 12:07:21.947653 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6jmqb"] Oct 06 12:07:21 crc kubenswrapper[4698]: I1006 12:07:21.949501 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:21 crc kubenswrapper[4698]: I1006 12:07:21.952122 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.017101 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6jmqb"] Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079381 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079447 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079481 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrgc\" (UniqueName: \"kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.079544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc 
kubenswrapper[4698]: I1006 12:07:22.181573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181624 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181693 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrgc\" (UniqueName: \"kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181747 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.181776 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.182751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.183370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.183947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.184275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.184525 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.184862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.225532 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6jmqb"] Oct 06 12:07:22 crc kubenswrapper[4698]: E1006 12:07:22.226579 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-xzrgc], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" podUID="d380e31e-83cd-4346-af29-357a8be018e9" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.254990 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrgc\" (UniqueName: \"kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc\") pod \"dnsmasq-dns-67b789f86c-6jmqb\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.315089 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-md65p"] Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.317520 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.323096 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-md65p"] Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.399344 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.399548 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-config\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.399642 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.399672 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " 
pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.399780 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rclk\" (UniqueName: \"kubernetes.io/projected/a49ef859-b876-474a-9cd2-4bab9f43799a-kube-api-access-6rclk\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.400245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.400324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502388 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-config\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502480 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " 
pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502508 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rclk\" (UniqueName: \"kubernetes.io/projected/a49ef859-b876-474a-9cd2-4bab9f43799a-kube-api-access-6rclk\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502595 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502631 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.502682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" 
Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.503659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.504227 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-config\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.504758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.505376 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.506258 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.506491 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49ef859-b876-474a-9cd2-4bab9f43799a-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.540098 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rclk\" (UniqueName: \"kubernetes.io/projected/a49ef859-b876-474a-9cd2-4bab9f43799a-kube-api-access-6rclk\") pod \"dnsmasq-dns-6bcf8b9d95-md65p\" (UID: \"a49ef859-b876-474a-9cd2-4bab9f43799a\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.648471 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.723406 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"770a4197-e506-41c8-921b-31db7abd83fe","Type":"ContainerStarted","Data":"4e2d9ae6ee6f547bbcdb3d570f477e5ec1f112ef2e0f2cdeb5cdc6504c84d523"} Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.726708 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.726902 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0c4e83e2-715d-4418-a8b2-c4fe36f46192","Type":"ContainerStarted","Data":"6b21cb3ec0897bda28d521355368be16f1b35a6982e702b2f8d7c436a917d111"} Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.805103 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948150 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzrgc\" (UniqueName: \"kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948232 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948389 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948422 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948490 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948574 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.948629 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0\") pod \"d380e31e-83cd-4346-af29-357a8be018e9\" (UID: \"d380e31e-83cd-4346-af29-357a8be018e9\") " Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.949711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.949749 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.950113 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config" (OuterVolumeSpecName: "config") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.950646 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.950833 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.951197 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:22 crc kubenswrapper[4698]: I1006 12:07:22.956367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc" (OuterVolumeSpecName: "kube-api-access-xzrgc") pod "d380e31e-83cd-4346-af29-357a8be018e9" (UID: "d380e31e-83cd-4346-af29-357a8be018e9"). InnerVolumeSpecName "kube-api-access-xzrgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051383 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051847 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051862 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051875 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051887 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051902 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzrgc\" (UniqueName: \"kubernetes.io/projected/d380e31e-83cd-4346-af29-357a8be018e9-kube-api-access-xzrgc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.051917 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380e31e-83cd-4346-af29-357a8be018e9-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.237224 
4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-md65p"] Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.742555 4698 generic.go:334] "Generic (PLEG): container finished" podID="a49ef859-b876-474a-9cd2-4bab9f43799a" containerID="8ea95ef0c63f25377392f34afed078a14ccc8b7592846e9b4271a3b2d3da132b" exitCode=0 Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.742665 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" event={"ID":"a49ef859-b876-474a-9cd2-4bab9f43799a","Type":"ContainerDied","Data":"8ea95ef0c63f25377392f34afed078a14ccc8b7592846e9b4271a3b2d3da132b"} Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.743149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" event={"ID":"a49ef859-b876-474a-9cd2-4bab9f43799a","Type":"ContainerStarted","Data":"dc2b097d5041420270cc53a28fe573145f9ee563fa68664f15c2776154389319"} Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.743555 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6jmqb" Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.876455 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6jmqb"] Oct 06 12:07:23 crc kubenswrapper[4698]: I1006 12:07:23.879483 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6jmqb"] Oct 06 12:07:24 crc kubenswrapper[4698]: I1006 12:07:24.049113 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="90c98585-3fd3-42cb-b011-01ecd1227057" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: i/o timeout" Oct 06 12:07:24 crc kubenswrapper[4698]: I1006 12:07:24.757104 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" event={"ID":"a49ef859-b876-474a-9cd2-4bab9f43799a","Type":"ContainerStarted","Data":"9dd6061d8c787a85b2c363c0568b37726294db32ff1f25880770af67dc053d75"} Oct 06 12:07:24 crc kubenswrapper[4698]: I1006 12:07:24.757589 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:24 crc kubenswrapper[4698]: I1006 12:07:24.780066 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" podStartSLOduration=2.780041423 podStartE2EDuration="2.780041423s" podCreationTimestamp="2025-10-06 12:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:24.778429114 +0000 UTC m=+1332.191121287" watchObservedRunningTime="2025-10-06 12:07:24.780041423 +0000 UTC m=+1332.192733596" Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.235307 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.235394 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.235463 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.236520 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.236587 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3" gracePeriod=600 Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.346543 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d380e31e-83cd-4346-af29-357a8be018e9" path="/var/lib/kubelet/pods/d380e31e-83cd-4346-af29-357a8be018e9/volumes" Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.777070 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" 
containerID="715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3" exitCode=0 Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.777211 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3"} Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.778112 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"} Oct 06 12:07:25 crc kubenswrapper[4698]: I1006 12:07:25.778148 4698 scope.go:117] "RemoveContainer" containerID="08949ee05d365e895ee66ed6a6e38acc8b8b1f686a7e426a5dbaacabe5cc7044" Oct 06 12:07:32 crc kubenswrapper[4698]: I1006 12:07:32.650618 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bcf8b9d95-md65p" Oct 06 12:07:32 crc kubenswrapper[4698]: I1006 12:07:32.734329 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:07:32 crc kubenswrapper[4698]: I1006 12:07:32.734706 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="dnsmasq-dns" containerID="cri-o://857b9ed6758cc01ba25edf1d2dfe5f527e7a380c3c07747a7968d4757e626f22" gracePeriod=10 Oct 06 12:07:32 crc kubenswrapper[4698]: I1006 12:07:32.888297 4698 generic.go:334] "Generic (PLEG): container finished" podID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerID="857b9ed6758cc01ba25edf1d2dfe5f527e7a380c3c07747a7968d4757e626f22" exitCode=0 Oct 06 12:07:32 crc kubenswrapper[4698]: I1006 12:07:32.888345 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerDied","Data":"857b9ed6758cc01ba25edf1d2dfe5f527e7a380c3c07747a7968d4757e626f22"} Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.396091 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529427 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqg8\" (UniqueName: \"kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529552 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529694 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529838 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.529927 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config\") pod \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\" (UID: \"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38\") " Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.539203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8" (OuterVolumeSpecName: "kube-api-access-zpqg8") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "kube-api-access-zpqg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.596217 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.609676 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.611612 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.627621 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config" (OuterVolumeSpecName: "config") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.637912 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" (UID: "bf1a732f-d9ad-4b52-ac31-0f76c75a8a38"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638354 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638395 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638407 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqg8\" (UniqueName: \"kubernetes.io/projected/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-kube-api-access-zpqg8\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638419 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638431 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.638441 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.907874 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" event={"ID":"bf1a732f-d9ad-4b52-ac31-0f76c75a8a38","Type":"ContainerDied","Data":"e29c1c3303a857cd21b05ecf2db1d5fc5b1e16163bac9138f977945b3a5df1f0"} Oct 06 12:07:33 crc 
kubenswrapper[4698]: I1006 12:07:33.908432 4698 scope.go:117] "RemoveContainer" containerID="857b9ed6758cc01ba25edf1d2dfe5f527e7a380c3c07747a7968d4757e626f22" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.907908 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.963206 4698 scope.go:117] "RemoveContainer" containerID="4ddbfd4a42f2806b2d3b34a8e0599067c233e796a9421b143d6031a87d37a4a2" Oct 06 12:07:33 crc kubenswrapper[4698]: I1006 12:07:33.985366 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:07:34 crc kubenswrapper[4698]: I1006 12:07:34.026102 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-p7h28"] Oct 06 12:07:35 crc kubenswrapper[4698]: I1006 12:07:35.348687 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" path="/var/lib/kubelet/pods/bf1a732f-d9ad-4b52-ac31-0f76c75a8a38/volumes" Oct 06 12:07:38 crc kubenswrapper[4698]: I1006 12:07:38.192099 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-p7h28" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.219:5353: i/o timeout" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.424843 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf"] Oct 06 12:07:46 crc kubenswrapper[4698]: E1006 12:07:46.426140 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="dnsmasq-dns" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.426160 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="dnsmasq-dns" Oct 06 
12:07:46 crc kubenswrapper[4698]: E1006 12:07:46.426201 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="init" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.426213 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="init" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.426492 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1a732f-d9ad-4b52-ac31-0f76c75a8a38" containerName="dnsmasq-dns" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.427509 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.435894 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.436375 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.436383 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.436826 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.441190 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf"] Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.480043 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6fg\" (UniqueName: \"kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.480130 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.480186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.481753 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.584115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j6fg\" (UniqueName: \"kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 
12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.584212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.584270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.584355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.593752 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.616237 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.616472 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.621126 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j6fg\" (UniqueName: \"kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:46 crc kubenswrapper[4698]: I1006 12:07:46.755440 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:07:47 crc kubenswrapper[4698]: I1006 12:07:47.435269 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf"] Oct 06 12:07:48 crc kubenswrapper[4698]: I1006 12:07:48.113385 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" event={"ID":"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9","Type":"ContainerStarted","Data":"3c9652a9284f3404b8e85c85d950a4c413cee4a2ae9b966fb17033a9e42c1b75"} Oct 06 12:07:54 crc kubenswrapper[4698]: I1006 12:07:54.215237 4698 generic.go:334] "Generic (PLEG): container finished" podID="770a4197-e506-41c8-921b-31db7abd83fe" containerID="4e2d9ae6ee6f547bbcdb3d570f477e5ec1f112ef2e0f2cdeb5cdc6504c84d523" exitCode=0 Oct 06 12:07:54 crc kubenswrapper[4698]: I1006 12:07:54.215366 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"770a4197-e506-41c8-921b-31db7abd83fe","Type":"ContainerDied","Data":"4e2d9ae6ee6f547bbcdb3d570f477e5ec1f112ef2e0f2cdeb5cdc6504c84d523"} Oct 06 12:07:55 crc kubenswrapper[4698]: I1006 12:07:55.244979 4698 generic.go:334] "Generic (PLEG): container finished" podID="0c4e83e2-715d-4418-a8b2-c4fe36f46192" containerID="6b21cb3ec0897bda28d521355368be16f1b35a6982e702b2f8d7c436a917d111" exitCode=0 Oct 06 12:07:55 crc kubenswrapper[4698]: I1006 12:07:55.245098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0c4e83e2-715d-4418-a8b2-c4fe36f46192","Type":"ContainerDied","Data":"6b21cb3ec0897bda28d521355368be16f1b35a6982e702b2f8d7c436a917d111"} Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.296320 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0c4e83e2-715d-4418-a8b2-c4fe36f46192","Type":"ContainerStarted","Data":"322fd9a21c2b09a8a2bd8dec1084411a4492e8c512473f6b97644560f8694dca"} Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.300490 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.301109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" event={"ID":"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9","Type":"ContainerStarted","Data":"2f47f89107302f0fb565f3ad483c7b07d04efd9a0e3cc6c5c1b2dd159fdf1edf"} Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.303339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"770a4197-e506-41c8-921b-31db7abd83fe","Type":"ContainerStarted","Data":"a7e82bf19ccd36a151e0ccf417369d72560fa71d36792c1f71b3c7936cf7b6e2"} Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.303649 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.340436 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.340410678 podStartE2EDuration="38.340410678s" podCreationTimestamp="2025-10-06 12:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:57.333256762 +0000 UTC m=+1364.745948945" watchObservedRunningTime="2025-10-06 12:07:57.340410678 +0000 UTC m=+1364.753102861" Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.354155 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" podStartSLOduration=2.059652683 podStartE2EDuration="11.354126917s" 
podCreationTimestamp="2025-10-06 12:07:46 +0000 UTC" firstStartedPulling="2025-10-06 12:07:47.445102803 +0000 UTC m=+1354.857794986" lastFinishedPulling="2025-10-06 12:07:56.739577047 +0000 UTC m=+1364.152269220" observedRunningTime="2025-10-06 12:07:57.353894682 +0000 UTC m=+1364.766586865" watchObservedRunningTime="2025-10-06 12:07:57.354126917 +0000 UTC m=+1364.766819100" Oct 06 12:07:57 crc kubenswrapper[4698]: I1006 12:07:57.401227 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.401201004 podStartE2EDuration="39.401201004s" podCreationTimestamp="2025-10-06 12:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:07:57.389288777 +0000 UTC m=+1364.801980960" watchObservedRunningTime="2025-10-06 12:07:57.401201004 +0000 UTC m=+1364.813893187" Oct 06 12:08:09 crc kubenswrapper[4698]: I1006 12:08:09.124618 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 12:08:09 crc kubenswrapper[4698]: I1006 12:08:09.468181 4698 generic.go:334] "Generic (PLEG): container finished" podID="4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" containerID="2f47f89107302f0fb565f3ad483c7b07d04efd9a0e3cc6c5c1b2dd159fdf1edf" exitCode=0 Oct 06 12:08:09 crc kubenswrapper[4698]: I1006 12:08:09.468264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" event={"ID":"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9","Type":"ContainerDied","Data":"2f47f89107302f0fb565f3ad483c7b07d04efd9a0e3cc6c5c1b2dd159fdf1edf"} Oct 06 12:08:10 crc kubenswrapper[4698]: I1006 12:08:10.121302 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.050305 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.161388 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j6fg\" (UniqueName: \"kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg\") pod \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.161927 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory\") pod \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.161989 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle\") pod \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.162069 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key\") pod \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\" (UID: \"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9\") " Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.222516 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" (UID: "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.227668 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg" (OuterVolumeSpecName: "kube-api-access-7j6fg") pod "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" (UID: "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9"). InnerVolumeSpecName "kube-api-access-7j6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.264836 4698 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.264897 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j6fg\" (UniqueName: \"kubernetes.io/projected/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-kube-api-access-7j6fg\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.308264 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory" (OuterVolumeSpecName: "inventory") pod "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" (UID: "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.355711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" (UID: "4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.367625 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.367661 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.491488 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" event={"ID":"4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9","Type":"ContainerDied","Data":"3c9652a9284f3404b8e85c85d950a4c413cee4a2ae9b966fb17033a9e42c1b75"} Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.491534 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c9652a9284f3404b8e85c85d950a4c413cee4a2ae9b966fb17033a9e42c1b75" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.491579 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.596381 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl"] Oct 06 12:08:11 crc kubenswrapper[4698]: E1006 12:08:11.597083 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.597110 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.597410 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.598540 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.601202 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.601349 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.601561 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.601689 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.614470 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl"] Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.675071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.675225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7npq\" (UniqueName: \"kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.675259 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.778708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.778860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7npq\" (UniqueName: \"kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.778905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.786325 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.787657 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.799123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7npq\" (UniqueName: \"kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-975wl\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:11 crc kubenswrapper[4698]: I1006 12:08:11.923352 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:12 crc kubenswrapper[4698]: I1006 12:08:12.536739 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl"] Oct 06 12:08:13 crc kubenswrapper[4698]: I1006 12:08:13.520956 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" event={"ID":"1aa6350f-22ad-49c6-b717-6b5db37d7b27","Type":"ContainerStarted","Data":"aec3704b5229ff7c898c7ebe62af6b338043cba88197cdd4bd817315dfde520f"} Oct 06 12:08:13 crc kubenswrapper[4698]: I1006 12:08:13.521614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" event={"ID":"1aa6350f-22ad-49c6-b717-6b5db37d7b27","Type":"ContainerStarted","Data":"e1802a8588ffde7c53961f80b988776cf6fa805b89cce5d3b03762e8c5d18f3b"} Oct 06 12:08:13 crc kubenswrapper[4698]: I1006 12:08:13.547473 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" podStartSLOduration=2.068702707 podStartE2EDuration="2.547451033s" podCreationTimestamp="2025-10-06 12:08:11 +0000 UTC" firstStartedPulling="2025-10-06 12:08:12.530815353 +0000 UTC m=+1379.943507526" lastFinishedPulling="2025-10-06 12:08:13.009563679 +0000 UTC m=+1380.422255852" observedRunningTime="2025-10-06 12:08:13.545446117 +0000 UTC m=+1380.958138300" watchObservedRunningTime="2025-10-06 12:08:13.547451033 +0000 UTC m=+1380.960143206" Oct 06 12:08:16 crc kubenswrapper[4698]: I1006 12:08:16.566419 4698 generic.go:334] "Generic (PLEG): container finished" podID="1aa6350f-22ad-49c6-b717-6b5db37d7b27" containerID="aec3704b5229ff7c898c7ebe62af6b338043cba88197cdd4bd817315dfde520f" exitCode=0 Oct 06 12:08:16 crc kubenswrapper[4698]: I1006 12:08:16.566524 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" event={"ID":"1aa6350f-22ad-49c6-b717-6b5db37d7b27","Type":"ContainerDied","Data":"aec3704b5229ff7c898c7ebe62af6b338043cba88197cdd4bd817315dfde520f"} Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.139468 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.250137 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key\") pod \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.250219 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7npq\" (UniqueName: \"kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq\") pod \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.250268 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory\") pod \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\" (UID: \"1aa6350f-22ad-49c6-b717-6b5db37d7b27\") " Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.259977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq" (OuterVolumeSpecName: "kube-api-access-h7npq") pod "1aa6350f-22ad-49c6-b717-6b5db37d7b27" (UID: "1aa6350f-22ad-49c6-b717-6b5db37d7b27"). InnerVolumeSpecName "kube-api-access-h7npq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.289239 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1aa6350f-22ad-49c6-b717-6b5db37d7b27" (UID: "1aa6350f-22ad-49c6-b717-6b5db37d7b27"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.296604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory" (OuterVolumeSpecName: "inventory") pod "1aa6350f-22ad-49c6-b717-6b5db37d7b27" (UID: "1aa6350f-22ad-49c6-b717-6b5db37d7b27"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.354328 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.354634 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7npq\" (UniqueName: \"kubernetes.io/projected/1aa6350f-22ad-49c6-b717-6b5db37d7b27-kube-api-access-h7npq\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.354701 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aa6350f-22ad-49c6-b717-6b5db37d7b27-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.601137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" 
event={"ID":"1aa6350f-22ad-49c6-b717-6b5db37d7b27","Type":"ContainerDied","Data":"e1802a8588ffde7c53961f80b988776cf6fa805b89cce5d3b03762e8c5d18f3b"} Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.601189 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1802a8588ffde7c53961f80b988776cf6fa805b89cce5d3b03762e8c5d18f3b" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.601210 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-975wl" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.682708 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc"] Oct 06 12:08:18 crc kubenswrapper[4698]: E1006 12:08:18.683474 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa6350f-22ad-49c6-b717-6b5db37d7b27" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.683500 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa6350f-22ad-49c6-b717-6b5db37d7b27" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.683891 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa6350f-22ad-49c6-b717-6b5db37d7b27" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.685090 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.687614 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.688930 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.688958 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.689519 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.695568 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc"] Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.765745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjln\" (UniqueName: \"kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.766170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.766281 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.766422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.869208 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjln\" (UniqueName: \"kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.869280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.869344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.869438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.874209 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.877763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.878577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:18 crc kubenswrapper[4698]: I1006 12:08:18.896546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-shjln\" (UniqueName: \"kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-885pc\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:19 crc kubenswrapper[4698]: I1006 12:08:19.003680 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:08:19 crc kubenswrapper[4698]: I1006 12:08:19.623153 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc"] Oct 06 12:08:19 crc kubenswrapper[4698]: W1006 12:08:19.630033 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a9dbb12_cd2b_4f3a_a602_35ae29132726.slice/crio-ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8 WatchSource:0}: Error finding container ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8: Status 404 returned error can't find the container with id ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8 Oct 06 12:08:20 crc kubenswrapper[4698]: I1006 12:08:20.637365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" event={"ID":"7a9dbb12-cd2b-4f3a-a602-35ae29132726","Type":"ContainerStarted","Data":"ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8"} Oct 06 12:08:21 crc kubenswrapper[4698]: I1006 12:08:21.658040 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" event={"ID":"7a9dbb12-cd2b-4f3a-a602-35ae29132726","Type":"ContainerStarted","Data":"af08291f53b63abbfe3d45fa144603467a9ee07d0899a70491e6418fc6965faf"} Oct 06 12:08:21 crc kubenswrapper[4698]: I1006 12:08:21.700762 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" podStartSLOduration=2.053624224 podStartE2EDuration="3.700722056s" podCreationTimestamp="2025-10-06 12:08:18 +0000 UTC" firstStartedPulling="2025-10-06 12:08:19.633936714 +0000 UTC m=+1387.046628887" lastFinishedPulling="2025-10-06 12:08:21.281034536 +0000 UTC m=+1388.693726719" observedRunningTime="2025-10-06 12:08:21.681244692 +0000 UTC m=+1389.093936895" watchObservedRunningTime="2025-10-06 12:08:21.700722056 +0000 UTC m=+1389.113414269" Oct 06 12:08:39 crc kubenswrapper[4698]: I1006 12:08:39.808139 4698 scope.go:117] "RemoveContainer" containerID="0e016f31b0410320c63583466a7c4587439c6a195568ca82d47576d0b0949f32" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.426398 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.430797 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.496935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.531968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d778\" (UniqueName: \"kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.532606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.533142 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.636396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.636559 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.636695 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d778\" (UniqueName: \"kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.637353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.637433 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.668844 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d778\" (UniqueName: \"kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778\") pod \"redhat-operators-bbl7x\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:06 crc kubenswrapper[4698]: I1006 12:09:06.762092 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:07 crc kubenswrapper[4698]: I1006 12:09:07.357220 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:08 crc kubenswrapper[4698]: I1006 12:09:08.272383 4698 generic.go:334] "Generic (PLEG): container finished" podID="4ebcf3f2-1626-4287-8685-403db8a82651" containerID="2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07" exitCode=0 Oct 06 12:09:08 crc kubenswrapper[4698]: I1006 12:09:08.272536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerDied","Data":"2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07"} Oct 06 12:09:08 crc kubenswrapper[4698]: I1006 12:09:08.272908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerStarted","Data":"7bd101a6633d6ae06aaf7e700a6a5b47f4c5cf67ab158ba1d9a3d76201610239"} Oct 06 12:09:10 crc kubenswrapper[4698]: I1006 12:09:10.300606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerStarted","Data":"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a"} Oct 06 12:09:12 crc kubenswrapper[4698]: I1006 12:09:12.332698 4698 generic.go:334] "Generic (PLEG): container finished" podID="4ebcf3f2-1626-4287-8685-403db8a82651" containerID="cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a" exitCode=0 Oct 06 12:09:12 crc kubenswrapper[4698]: I1006 12:09:12.332787 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" 
event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerDied","Data":"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a"} Oct 06 12:09:13 crc kubenswrapper[4698]: I1006 12:09:13.356571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerStarted","Data":"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410"} Oct 06 12:09:13 crc kubenswrapper[4698]: I1006 12:09:13.398226 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbl7x" podStartSLOduration=2.938625268 podStartE2EDuration="7.398192408s" podCreationTimestamp="2025-10-06 12:09:06 +0000 UTC" firstStartedPulling="2025-10-06 12:09:08.276970231 +0000 UTC m=+1435.689662414" lastFinishedPulling="2025-10-06 12:09:12.736537371 +0000 UTC m=+1440.149229554" observedRunningTime="2025-10-06 12:09:13.383712521 +0000 UTC m=+1440.796404734" watchObservedRunningTime="2025-10-06 12:09:13.398192408 +0000 UTC m=+1440.810884611" Oct 06 12:09:16 crc kubenswrapper[4698]: I1006 12:09:16.762853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:16 crc kubenswrapper[4698]: I1006 12:09:16.763223 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.758599 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.761511 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.785303 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.812187 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbl7x" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="registry-server" probeResult="failure" output=< Oct 06 12:09:17 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 12:09:17 crc kubenswrapper[4698]: > Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.932037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.932746 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:17 crc kubenswrapper[4698]: I1006 12:09:17.932786 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dc5k\" (UniqueName: \"kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.034935 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.034994 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dc5k\" (UniqueName: \"kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.035062 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.035537 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.035569 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.066600 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7dc5k\" (UniqueName: \"kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k\") pod \"community-operators-4xvtf\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.093098 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:18 crc kubenswrapper[4698]: I1006 12:09:18.640639 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:19 crc kubenswrapper[4698]: I1006 12:09:19.422778 4698 generic.go:334] "Generic (PLEG): container finished" podID="03113237-8582-4db9-9786-ee25a3cef0cb" containerID="cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d" exitCode=0 Oct 06 12:09:19 crc kubenswrapper[4698]: I1006 12:09:19.422897 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerDied","Data":"cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d"} Oct 06 12:09:19 crc kubenswrapper[4698]: I1006 12:09:19.423239 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerStarted","Data":"4a6cc838b50807595892c6698e832557cc68b1c220efb460bf187eb1e7b0138b"} Oct 06 12:09:21 crc kubenswrapper[4698]: I1006 12:09:21.453800 4698 generic.go:334] "Generic (PLEG): container finished" podID="03113237-8582-4db9-9786-ee25a3cef0cb" containerID="d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537" exitCode=0 Oct 06 12:09:21 crc kubenswrapper[4698]: I1006 12:09:21.453944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" 
event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerDied","Data":"d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537"} Oct 06 12:09:23 crc kubenswrapper[4698]: I1006 12:09:23.486369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerStarted","Data":"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c"} Oct 06 12:09:23 crc kubenswrapper[4698]: I1006 12:09:23.515648 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xvtf" podStartSLOduration=3.460080388 podStartE2EDuration="6.51562726s" podCreationTimestamp="2025-10-06 12:09:17 +0000 UTC" firstStartedPulling="2025-10-06 12:09:19.424753303 +0000 UTC m=+1446.837445476" lastFinishedPulling="2025-10-06 12:09:22.480300145 +0000 UTC m=+1449.892992348" observedRunningTime="2025-10-06 12:09:23.511946815 +0000 UTC m=+1450.924639018" watchObservedRunningTime="2025-10-06 12:09:23.51562726 +0000 UTC m=+1450.928319423" Oct 06 12:09:25 crc kubenswrapper[4698]: I1006 12:09:25.234959 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:09:25 crc kubenswrapper[4698]: I1006 12:09:25.235529 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:09:26 crc kubenswrapper[4698]: I1006 12:09:26.834952 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:26 crc kubenswrapper[4698]: I1006 12:09:26.924709 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:27 crc kubenswrapper[4698]: I1006 12:09:27.096696 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:28 crc kubenswrapper[4698]: I1006 12:09:28.093286 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:28 crc kubenswrapper[4698]: I1006 12:09:28.093338 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:28 crc kubenswrapper[4698]: I1006 12:09:28.165048 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:28 crc kubenswrapper[4698]: I1006 12:09:28.558720 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbl7x" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="registry-server" containerID="cri-o://2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410" gracePeriod=2 Oct 06 12:09:28 crc kubenswrapper[4698]: I1006 12:09:28.638774 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.108165 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.279145 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities\") pod \"4ebcf3f2-1626-4287-8685-403db8a82651\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.280524 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d778\" (UniqueName: \"kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778\") pod \"4ebcf3f2-1626-4287-8685-403db8a82651\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.280707 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content\") pod \"4ebcf3f2-1626-4287-8685-403db8a82651\" (UID: \"4ebcf3f2-1626-4287-8685-403db8a82651\") " Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.280355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities" (OuterVolumeSpecName: "utilities") pod "4ebcf3f2-1626-4287-8685-403db8a82651" (UID: "4ebcf3f2-1626-4287-8685-403db8a82651"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.295216 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778" (OuterVolumeSpecName: "kube-api-access-4d778") pod "4ebcf3f2-1626-4287-8685-403db8a82651" (UID: "4ebcf3f2-1626-4287-8685-403db8a82651"). InnerVolumeSpecName "kube-api-access-4d778". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.383174 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.383207 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d778\" (UniqueName: \"kubernetes.io/projected/4ebcf3f2-1626-4287-8685-403db8a82651-kube-api-access-4d778\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.420355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ebcf3f2-1626-4287-8685-403db8a82651" (UID: "4ebcf3f2-1626-4287-8685-403db8a82651"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.485757 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebcf3f2-1626-4287-8685-403db8a82651-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.574993 4698 generic.go:334] "Generic (PLEG): container finished" podID="4ebcf3f2-1626-4287-8685-403db8a82651" containerID="2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410" exitCode=0 Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.575105 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbl7x" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.575082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerDied","Data":"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410"} Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.575273 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbl7x" event={"ID":"4ebcf3f2-1626-4287-8685-403db8a82651","Type":"ContainerDied","Data":"7bd101a6633d6ae06aaf7e700a6a5b47f4c5cf67ab158ba1d9a3d76201610239"} Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.575398 4698 scope.go:117] "RemoveContainer" containerID="2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.624241 4698 scope.go:117] "RemoveContainer" containerID="cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.624471 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.631283 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbl7x"] Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.652189 4698 scope.go:117] "RemoveContainer" containerID="2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.692108 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.706685 4698 scope.go:117] "RemoveContainer" containerID="2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410" Oct 06 12:09:29 crc kubenswrapper[4698]: E1006 
12:09:29.707683 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410\": container with ID starting with 2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410 not found: ID does not exist" containerID="2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.707724 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410"} err="failed to get container status \"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410\": rpc error: code = NotFound desc = could not find container \"2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410\": container with ID starting with 2cc29851eb7863d77d576b843ceeb4ed421c49c5bbe4551d79d9458909e83410 not found: ID does not exist" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.707755 4698 scope.go:117] "RemoveContainer" containerID="cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a" Oct 06 12:09:29 crc kubenswrapper[4698]: E1006 12:09:29.708372 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a\": container with ID starting with cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a not found: ID does not exist" containerID="cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.708448 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a"} err="failed to get container status \"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a\": rpc 
error: code = NotFound desc = could not find container \"cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a\": container with ID starting with cf27e87d21a2afcd74ee9c0ac73b1dc667654828580acb3fe83935dcd8a9998a not found: ID does not exist" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.708503 4698 scope.go:117] "RemoveContainer" containerID="2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07" Oct 06 12:09:29 crc kubenswrapper[4698]: E1006 12:09:29.709137 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07\": container with ID starting with 2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07 not found: ID does not exist" containerID="2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07" Oct 06 12:09:29 crc kubenswrapper[4698]: I1006 12:09:29.709180 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07"} err="failed to get container status \"2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07\": rpc error: code = NotFound desc = could not find container \"2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07\": container with ID starting with 2cfad7f14981bdb24492838a0ebfa7f8e4a78bcead5572b1197fe26fcaa53a07 not found: ID does not exist" Oct 06 12:09:30 crc kubenswrapper[4698]: I1006 12:09:30.600078 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xvtf" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="registry-server" containerID="cri-o://6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c" gracePeriod=2 Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.191472 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.334556 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dc5k\" (UniqueName: \"kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k\") pod \"03113237-8582-4db9-9786-ee25a3cef0cb\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.335288 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities\") pod \"03113237-8582-4db9-9786-ee25a3cef0cb\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.335399 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content\") pod \"03113237-8582-4db9-9786-ee25a3cef0cb\" (UID: \"03113237-8582-4db9-9786-ee25a3cef0cb\") " Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.337836 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities" (OuterVolumeSpecName: "utilities") pod "03113237-8582-4db9-9786-ee25a3cef0cb" (UID: "03113237-8582-4db9-9786-ee25a3cef0cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.345207 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" path="/var/lib/kubelet/pods/4ebcf3f2-1626-4287-8685-403db8a82651/volumes" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.345520 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k" (OuterVolumeSpecName: "kube-api-access-7dc5k") pod "03113237-8582-4db9-9786-ee25a3cef0cb" (UID: "03113237-8582-4db9-9786-ee25a3cef0cb"). InnerVolumeSpecName "kube-api-access-7dc5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.393686 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03113237-8582-4db9-9786-ee25a3cef0cb" (UID: "03113237-8582-4db9-9786-ee25a3cef0cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.438978 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.439055 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03113237-8582-4db9-9786-ee25a3cef0cb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.439070 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dc5k\" (UniqueName: \"kubernetes.io/projected/03113237-8582-4db9-9786-ee25a3cef0cb-kube-api-access-7dc5k\") on node \"crc\" DevicePath \"\"" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.618874 4698 generic.go:334] "Generic (PLEG): container finished" podID="03113237-8582-4db9-9786-ee25a3cef0cb" containerID="6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c" exitCode=0 Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.618943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerDied","Data":"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c"} Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.618994 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xvtf" event={"ID":"03113237-8582-4db9-9786-ee25a3cef0cb","Type":"ContainerDied","Data":"4a6cc838b50807595892c6698e832557cc68b1c220efb460bf187eb1e7b0138b"} Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.619039 4698 scope.go:117] "RemoveContainer" containerID="6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 
12:09:31.619040 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xvtf" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.679166 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.690957 4698 scope.go:117] "RemoveContainer" containerID="d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.697571 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xvtf"] Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.731244 4698 scope.go:117] "RemoveContainer" containerID="cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.789732 4698 scope.go:117] "RemoveContainer" containerID="6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c" Oct 06 12:09:31 crc kubenswrapper[4698]: E1006 12:09:31.790554 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c\": container with ID starting with 6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c not found: ID does not exist" containerID="6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.790625 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c"} err="failed to get container status \"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c\": rpc error: code = NotFound desc = could not find container \"6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c\": container with ID starting with 
6a35c6c7a1bc5debfb1c9f61eeaec1be787abc50c969ede4672f3033e52e1d3c not found: ID does not exist" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.790667 4698 scope.go:117] "RemoveContainer" containerID="d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537" Oct 06 12:09:31 crc kubenswrapper[4698]: E1006 12:09:31.791548 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537\": container with ID starting with d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537 not found: ID does not exist" containerID="d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.791628 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537"} err="failed to get container status \"d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537\": rpc error: code = NotFound desc = could not find container \"d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537\": container with ID starting with d73ac2fef9044a768fddfb6327e98fdbc1d2726774ba7d66bd596db2be0e4537 not found: ID does not exist" Oct 06 12:09:31 crc kubenswrapper[4698]: I1006 12:09:31.791679 4698 scope.go:117] "RemoveContainer" containerID="cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d" Oct 06 12:09:31 crc kubenswrapper[4698]: E1006 12:09:31.792228 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d\": container with ID starting with cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d not found: ID does not exist" containerID="cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d" Oct 06 12:09:31 crc 
kubenswrapper[4698]: I1006 12:09:31.792265 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d"} err="failed to get container status \"cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d\": rpc error: code = NotFound desc = could not find container \"cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d\": container with ID starting with cdf4c83007d73b2fdd87c57dad8f0830657c138ec500640048eb190c9538871d not found: ID does not exist" Oct 06 12:09:33 crc kubenswrapper[4698]: I1006 12:09:33.354441 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" path="/var/lib/kubelet/pods/03113237-8582-4db9-9786-ee25a3cef0cb/volumes" Oct 06 12:09:39 crc kubenswrapper[4698]: I1006 12:09:39.894335 4698 scope.go:117] "RemoveContainer" containerID="001abc23485f32f08cce21908ea37a8ab5c4b8a32a1885c412b12faf1b97a0a6" Oct 06 12:09:39 crc kubenswrapper[4698]: I1006 12:09:39.970378 4698 scope.go:117] "RemoveContainer" containerID="cf434a5fd4b064e31ddd2d029b4719de2f7d3a0251f98dec49f01db760bfd68c" Oct 06 12:09:55 crc kubenswrapper[4698]: I1006 12:09:55.235829 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:09:55 crc kubenswrapper[4698]: I1006 12:09:55.236753 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.630670 4698 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631523 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="extract-utilities" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631538 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="extract-utilities" Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631560 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="extract-content" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631568 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="extract-content" Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631591 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="extract-utilities" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631598 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="extract-utilities" Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631610 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631616 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631645 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631651 4698 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: E1006 12:10:00.631665 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="extract-content" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631671 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="extract-content" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631892 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="03113237-8582-4db9-9786-ee25a3cef0cb" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.631918 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebcf3f2-1626-4287-8685-403db8a82651" containerName="registry-server" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.633666 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.662438 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.738446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.738767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kv6\" (UniqueName: \"kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.738812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.840933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kv6\" (UniqueName: \"kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.840988 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.841221 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.841791 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.842274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.878634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kv6\" (UniqueName: \"kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6\") pod \"certified-operators-zt6tm\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:00 crc kubenswrapper[4698]: I1006 12:10:00.962435 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:01 crc kubenswrapper[4698]: W1006 12:10:01.562522 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-5f5b2bd8b5a4f2a5e49782836c34de6792816e6f25ee764c328d53e3128770f8 WatchSource:0}: Error finding container 5f5b2bd8b5a4f2a5e49782836c34de6792816e6f25ee764c328d53e3128770f8: Status 404 returned error can't find the container with id 5f5b2bd8b5a4f2a5e49782836c34de6792816e6f25ee764c328d53e3128770f8 Oct 06 12:10:01 crc kubenswrapper[4698]: I1006 12:10:01.567246 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:02 crc kubenswrapper[4698]: I1006 12:10:02.065514 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerID="14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5" exitCode=0 Oct 06 12:10:02 crc kubenswrapper[4698]: I1006 12:10:02.065726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerDied","Data":"14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5"} Oct 06 12:10:02 crc kubenswrapper[4698]: I1006 12:10:02.066033 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerStarted","Data":"5f5b2bd8b5a4f2a5e49782836c34de6792816e6f25ee764c328d53e3128770f8"} Oct 06 12:10:02 crc kubenswrapper[4698]: I1006 12:10:02.070111 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:10:04 crc kubenswrapper[4698]: I1006 12:10:04.107071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerStarted","Data":"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d"} Oct 06 12:10:05 crc kubenswrapper[4698]: I1006 12:10:05.128257 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerID="e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d" exitCode=0 Oct 06 12:10:05 crc kubenswrapper[4698]: I1006 12:10:05.128553 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerDied","Data":"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d"} Oct 06 12:10:07 crc kubenswrapper[4698]: I1006 12:10:07.159227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerStarted","Data":"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc"} Oct 06 12:10:10 crc kubenswrapper[4698]: I1006 12:10:10.963048 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:10 crc kubenswrapper[4698]: I1006 12:10:10.965343 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:11 crc kubenswrapper[4698]: I1006 12:10:11.036793 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:11 crc kubenswrapper[4698]: I1006 12:10:11.063213 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zt6tm" podStartSLOduration=7.095128704 podStartE2EDuration="11.063180999s" podCreationTimestamp="2025-10-06 12:10:00 +0000 UTC" 
firstStartedPulling="2025-10-06 12:10:02.069377681 +0000 UTC m=+1489.482069894" lastFinishedPulling="2025-10-06 12:10:06.037430026 +0000 UTC m=+1493.450122189" observedRunningTime="2025-10-06 12:10:07.194447107 +0000 UTC m=+1494.607139280" watchObservedRunningTime="2025-10-06 12:10:11.063180999 +0000 UTC m=+1498.475873212" Oct 06 12:10:11 crc kubenswrapper[4698]: I1006 12:10:11.275766 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:11 crc kubenswrapper[4698]: I1006 12:10:11.343411 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.234261 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zt6tm" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="registry-server" containerID="cri-o://53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc" gracePeriod=2 Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.777981 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.819119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6kv6\" (UniqueName: \"kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6\") pod \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.819288 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities\") pod \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.819574 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content\") pod \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\" (UID: \"f2f6d6ca-969c-4356-bd0e-3607ca191d19\") " Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.820196 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities" (OuterVolumeSpecName: "utilities") pod "f2f6d6ca-969c-4356-bd0e-3607ca191d19" (UID: "f2f6d6ca-969c-4356-bd0e-3607ca191d19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.820957 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.827476 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6" (OuterVolumeSpecName: "kube-api-access-g6kv6") pod "f2f6d6ca-969c-4356-bd0e-3607ca191d19" (UID: "f2f6d6ca-969c-4356-bd0e-3607ca191d19"). InnerVolumeSpecName "kube-api-access-g6kv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.872710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2f6d6ca-969c-4356-bd0e-3607ca191d19" (UID: "f2f6d6ca-969c-4356-bd0e-3607ca191d19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.923604 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f6d6ca-969c-4356-bd0e-3607ca191d19-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:10:13 crc kubenswrapper[4698]: I1006 12:10:13.923649 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6kv6\" (UniqueName: \"kubernetes.io/projected/f2f6d6ca-969c-4356-bd0e-3607ca191d19-kube-api-access-g6kv6\") on node \"crc\" DevicePath \"\"" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.267069 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerID="53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc" exitCode=0 Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.267158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerDied","Data":"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc"} Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.267223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt6tm" event={"ID":"f2f6d6ca-969c-4356-bd0e-3607ca191d19","Type":"ContainerDied","Data":"5f5b2bd8b5a4f2a5e49782836c34de6792816e6f25ee764c328d53e3128770f8"} Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.267226 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt6tm" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.267256 4698 scope.go:117] "RemoveContainer" containerID="53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.313585 4698 scope.go:117] "RemoveContainer" containerID="e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.329983 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.345187 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zt6tm"] Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.359930 4698 scope.go:117] "RemoveContainer" containerID="14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.410191 4698 scope.go:117] "RemoveContainer" containerID="53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc" Oct 06 12:10:14 crc kubenswrapper[4698]: E1006 12:10:14.410634 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc\": container with ID starting with 53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc not found: ID does not exist" containerID="53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.410718 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc"} err="failed to get container status \"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc\": rpc error: code = NotFound desc = could not find 
container \"53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc\": container with ID starting with 53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc not found: ID does not exist" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.410763 4698 scope.go:117] "RemoveContainer" containerID="e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d" Oct 06 12:10:14 crc kubenswrapper[4698]: E1006 12:10:14.411169 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d\": container with ID starting with e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d not found: ID does not exist" containerID="e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.411209 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d"} err="failed to get container status \"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d\": rpc error: code = NotFound desc = could not find container \"e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d\": container with ID starting with e304f410a6258de23dc2529b81c248a73613138187ed9ea14c3bd82acb38329d not found: ID does not exist" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.411231 4698 scope.go:117] "RemoveContainer" containerID="14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5" Oct 06 12:10:14 crc kubenswrapper[4698]: E1006 12:10:14.411509 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5\": container with ID starting with 14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5 not found: ID does 
not exist" containerID="14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5" Oct 06 12:10:14 crc kubenswrapper[4698]: I1006 12:10:14.411630 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5"} err="failed to get container status \"14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5\": rpc error: code = NotFound desc = could not find container \"14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5\": container with ID starting with 14bb6e2c1328a37af6622e4b403f14b68570598342c1f188649fbce18444aff5 not found: ID does not exist" Oct 06 12:10:15 crc kubenswrapper[4698]: I1006 12:10:15.350425 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" path="/var/lib/kubelet/pods/f2f6d6ca-969c-4356-bd0e-3607ca191d19/volumes" Oct 06 12:10:16 crc kubenswrapper[4698]: E1006 12:10:16.309991 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.235089 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.235782 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.235832 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.236721 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.236851 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" gracePeriod=600 Oct 06 12:10:25 crc kubenswrapper[4698]: E1006 12:10:25.374220 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.446807 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" exitCode=0 Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.446859 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"} Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.447146 4698 scope.go:117] "RemoveContainer" containerID="715e2c926ea733c39c4353502c94d954bc502215a13b1b5dd34c48e59ae896f3" Oct 06 12:10:25 crc kubenswrapper[4698]: I1006 12:10:25.448395 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:10:25 crc kubenswrapper[4698]: E1006 12:10:25.448829 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:10:26 crc kubenswrapper[4698]: E1006 12:10:26.678928 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:10:37 crc kubenswrapper[4698]: E1006 12:10:37.008404 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory 
cache]" Oct 06 12:10:40 crc kubenswrapper[4698]: I1006 12:10:40.330060 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:10:40 crc kubenswrapper[4698]: E1006 12:10:40.330995 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:10:47 crc kubenswrapper[4698]: E1006 12:10:47.265978 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:10:55 crc kubenswrapper[4698]: I1006 12:10:55.329546 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:10:55 crc kubenswrapper[4698]: E1006 12:10:55.330245 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:10:57 crc kubenswrapper[4698]: E1006 12:10:57.611276 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:11:07 crc kubenswrapper[4698]: E1006 12:11:07.891554 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f6d6ca_969c_4356_bd0e_3607ca191d19.slice/crio-conmon-53ffb620dc4affa34eb0f9ad171bac2749bc4cb112e9f5abe69244a1a5539cbc.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:11:10 crc kubenswrapper[4698]: I1006 12:11:10.330179 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:11:10 crc kubenswrapper[4698]: E1006 12:11:10.333071 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:11:25 crc kubenswrapper[4698]: I1006 12:11:25.329443 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:11:25 crc kubenswrapper[4698]: E1006 12:11:25.330905 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:11:33 crc kubenswrapper[4698]: I1006 12:11:33.365863 4698 generic.go:334] "Generic (PLEG): container finished" podID="7a9dbb12-cd2b-4f3a-a602-35ae29132726" containerID="af08291f53b63abbfe3d45fa144603467a9ee07d0899a70491e6418fc6965faf" exitCode=0 Oct 06 12:11:33 crc kubenswrapper[4698]: I1006 12:11:33.366006 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" event={"ID":"7a9dbb12-cd2b-4f3a-a602-35ae29132726","Type":"ContainerDied","Data":"af08291f53b63abbfe3d45fa144603467a9ee07d0899a70491e6418fc6965faf"} Oct 06 12:11:34 crc kubenswrapper[4698]: I1006 12:11:34.980126 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.089002 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle\") pod \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.089127 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key\") pod \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.089385 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shjln\" (UniqueName: \"kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln\") pod \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.089448 
4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory\") pod \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\" (UID: \"7a9dbb12-cd2b-4f3a-a602-35ae29132726\") " Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.097365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln" (OuterVolumeSpecName: "kube-api-access-shjln") pod "7a9dbb12-cd2b-4f3a-a602-35ae29132726" (UID: "7a9dbb12-cd2b-4f3a-a602-35ae29132726"). InnerVolumeSpecName "kube-api-access-shjln". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.103869 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7a9dbb12-cd2b-4f3a-a602-35ae29132726" (UID: "7a9dbb12-cd2b-4f3a-a602-35ae29132726"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.160631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7a9dbb12-cd2b-4f3a-a602-35ae29132726" (UID: "7a9dbb12-cd2b-4f3a-a602-35ae29132726"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.170964 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory" (OuterVolumeSpecName: "inventory") pod "7a9dbb12-cd2b-4f3a-a602-35ae29132726" (UID: "7a9dbb12-cd2b-4f3a-a602-35ae29132726"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.192580 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shjln\" (UniqueName: \"kubernetes.io/projected/7a9dbb12-cd2b-4f3a-a602-35ae29132726-kube-api-access-shjln\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.192983 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.192997 4698 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.193012 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9dbb12-cd2b-4f3a-a602-35ae29132726-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.399816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" event={"ID":"7a9dbb12-cd2b-4f3a-a602-35ae29132726","Type":"ContainerDied","Data":"ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8"} Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.399883 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee34eda67bb84d54a3e7bb69a507abb1758e397acc1d6b338bcec8a53fb50ca8" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.399911 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-885pc" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.516887 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx"] Oct 06 12:11:35 crc kubenswrapper[4698]: E1006 12:11:35.517577 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="extract-utilities" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.517610 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="extract-utilities" Oct 06 12:11:35 crc kubenswrapper[4698]: E1006 12:11:35.517655 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="registry-server" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.517671 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="registry-server" Oct 06 12:11:35 crc kubenswrapper[4698]: E1006 12:11:35.517711 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9dbb12-cd2b-4f3a-a602-35ae29132726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.517724 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9dbb12-cd2b-4f3a-a602-35ae29132726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:35 crc kubenswrapper[4698]: E1006 12:11:35.517779 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="extract-content" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.517791 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="extract-content" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.523703 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f6d6ca-969c-4356-bd0e-3607ca191d19" containerName="registry-server" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.523746 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9dbb12-cd2b-4f3a-a602-35ae29132726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.524717 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.530752 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.531098 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.531440 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.531654 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.536349 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx"] Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.706122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s8h\" (UniqueName: \"kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc 
kubenswrapper[4698]: I1006 12:11:35.706427 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.706959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.809895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.810117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.810254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5s8h\" (UniqueName: 
\"kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.819642 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.833615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.838010 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5s8h\" (UniqueName: \"kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:35 crc kubenswrapper[4698]: I1006 12:11:35.865600 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" Oct 06 12:11:36 crc kubenswrapper[4698]: I1006 12:11:36.340936 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx"] Oct 06 12:11:36 crc kubenswrapper[4698]: W1006 12:11:36.347716 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb95c9b2_ec91_415c_851c_1d10cd61f0f4.slice/crio-bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea WatchSource:0}: Error finding container bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea: Status 404 returned error can't find the container with id bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea Oct 06 12:11:36 crc kubenswrapper[4698]: I1006 12:11:36.414923 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" event={"ID":"cb95c9b2-ec91-415c-851c-1d10cd61f0f4","Type":"ContainerStarted","Data":"bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea"} Oct 06 12:11:37 crc kubenswrapper[4698]: I1006 12:11:37.329891 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:11:37 crc kubenswrapper[4698]: E1006 12:11:37.331201 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:11:37 crc kubenswrapper[4698]: I1006 12:11:37.427917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" event={"ID":"cb95c9b2-ec91-415c-851c-1d10cd61f0f4","Type":"ContainerStarted","Data":"d819a2e7d254a141cc5e09db5d03e2739c151a709592fa4dc2a8e8e12aeffe75"} Oct 06 12:11:37 crc kubenswrapper[4698]: I1006 12:11:37.457335 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" podStartSLOduration=1.822111233 podStartE2EDuration="2.457316765s" podCreationTimestamp="2025-10-06 12:11:35 +0000 UTC" firstStartedPulling="2025-10-06 12:11:36.351012995 +0000 UTC m=+1583.763705188" lastFinishedPulling="2025-10-06 12:11:36.986218537 +0000 UTC m=+1584.398910720" observedRunningTime="2025-10-06 12:11:37.453884425 +0000 UTC m=+1584.866576608" watchObservedRunningTime="2025-10-06 12:11:37.457316765 +0000 UTC m=+1584.870008938" Oct 06 12:11:40 crc kubenswrapper[4698]: I1006 12:11:40.181161 4698 scope.go:117] "RemoveContainer" containerID="dc017e1b923a798a9570b294f0e60590f5e853e8c6c601c23cb81124e5b861e3" Oct 06 12:11:40 crc kubenswrapper[4698]: I1006 12:11:40.209436 4698 scope.go:117] "RemoveContainer" containerID="c28c4c51da742f5427d5f2ff1a4aac7320a12f96d6774923c5fbbbcab51acc90" Oct 06 12:11:49 crc kubenswrapper[4698]: I1006 12:11:49.072518 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-f7cgj"] Oct 06 12:11:49 crc kubenswrapper[4698]: I1006 12:11:49.095688 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-f7cgj"] Oct 06 12:11:49 crc kubenswrapper[4698]: I1006 12:11:49.346055 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452481c1-a46c-47b7-ab29-6a3b3628197d" path="/var/lib/kubelet/pods/452481c1-a46c-47b7-ab29-6a3b3628197d/volumes" Oct 06 12:11:52 crc kubenswrapper[4698]: I1006 12:11:52.329530 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:11:52 
crc kubenswrapper[4698]: E1006 12:11:52.330371 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:11:55 crc kubenswrapper[4698]: I1006 12:11:55.052507 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-c4w74"] Oct 06 12:11:55 crc kubenswrapper[4698]: I1006 12:11:55.065187 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-c4w74"] Oct 06 12:11:55 crc kubenswrapper[4698]: I1006 12:11:55.345573 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b244c105-6d87-4a82-865f-a9304464b946" path="/var/lib/kubelet/pods/b244c105-6d87-4a82-865f-a9304464b946/volumes" Oct 06 12:11:56 crc kubenswrapper[4698]: I1006 12:11:56.045713 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rn46p"] Oct 06 12:11:56 crc kubenswrapper[4698]: I1006 12:11:56.065727 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rn46p"] Oct 06 12:11:57 crc kubenswrapper[4698]: I1006 12:11:57.035826 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sb8gh"] Oct 06 12:11:57 crc kubenswrapper[4698]: I1006 12:11:57.047425 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sb8gh"] Oct 06 12:11:57 crc kubenswrapper[4698]: I1006 12:11:57.346205 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789c653a-f797-4245-8754-0de0cd335997" path="/var/lib/kubelet/pods/789c653a-f797-4245-8754-0de0cd335997/volumes" Oct 06 12:11:57 crc kubenswrapper[4698]: I1006 12:11:57.346976 
4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d18d85-a449-4bc6-9bc2-ba89b71e9125" path="/var/lib/kubelet/pods/f0d18d85-a449-4bc6-9bc2-ba89b71e9125/volumes" Oct 06 12:12:03 crc kubenswrapper[4698]: I1006 12:12:03.342273 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:12:03 crc kubenswrapper[4698]: E1006 12:12:03.343962 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:12:06 crc kubenswrapper[4698]: I1006 12:12:06.063312 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f541-account-create-hq766"] Oct 06 12:12:06 crc kubenswrapper[4698]: I1006 12:12:06.076010 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bd25-account-create-hrjlh"] Oct 06 12:12:06 crc kubenswrapper[4698]: I1006 12:12:06.091171 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f541-account-create-hq766"] Oct 06 12:12:06 crc kubenswrapper[4698]: I1006 12:12:06.100940 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bd25-account-create-hrjlh"] Oct 06 12:12:07 crc kubenswrapper[4698]: I1006 12:12:07.047898 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f5be-account-create-w2bq6"] Oct 06 12:12:07 crc kubenswrapper[4698]: I1006 12:12:07.062831 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f5be-account-create-w2bq6"] Oct 06 12:12:07 crc kubenswrapper[4698]: I1006 12:12:07.347709 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="272010f5-1745-47b9-bf97-af5335394b6f" path="/var/lib/kubelet/pods/272010f5-1745-47b9-bf97-af5335394b6f/volumes" Oct 06 12:12:07 crc kubenswrapper[4698]: I1006 12:12:07.349126 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bbda08-fa2d-4bad-af25-163aabf96973" path="/var/lib/kubelet/pods/47bbda08-fa2d-4bad-af25-163aabf96973/volumes" Oct 06 12:12:07 crc kubenswrapper[4698]: I1006 12:12:07.350337 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527d6d0f-30f7-4a96-866f-8392b12057b3" path="/var/lib/kubelet/pods/527d6d0f-30f7-4a96-866f-8392b12057b3/volumes" Oct 06 12:12:09 crc kubenswrapper[4698]: I1006 12:12:09.043032 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-2c05-account-create-drzlg"] Oct 06 12:12:09 crc kubenswrapper[4698]: I1006 12:12:09.055703 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-2c05-account-create-drzlg"] Oct 06 12:12:09 crc kubenswrapper[4698]: I1006 12:12:09.349439 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a95ffd-2442-4483-990b-5d80a2f84ec2" path="/var/lib/kubelet/pods/45a95ffd-2442-4483-990b-5d80a2f84ec2/volumes" Oct 06 12:12:18 crc kubenswrapper[4698]: I1006 12:12:18.328662 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:12:18 crc kubenswrapper[4698]: E1006 12:12:18.330856 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.093035 4698 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/neutron-db-create-9grl8"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.105203 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mx92t"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.122136 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-spzn5"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.142436 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9grl8"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.142494 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mx92t"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.146721 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-spzn5"] Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.341789 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8adfa5-ff54-4436-ae26-3a1723c0692d" path="/var/lib/kubelet/pods/2d8adfa5-ff54-4436-ae26-3a1723c0692d/volumes" Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.342381 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c" path="/var/lib/kubelet/pods/83f4f2c6-4690-44cc-9fa1-0c6ccce1f95c/volumes" Oct 06 12:12:25 crc kubenswrapper[4698]: I1006 12:12:25.342979 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca60d6f-56ad-4cc4-971d-458cd6f5aad0" path="/var/lib/kubelet/pods/8ca60d6f-56ad-4cc4-971d-458cd6f5aad0/volumes" Oct 06 12:12:29 crc kubenswrapper[4698]: I1006 12:12:29.330695 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:12:29 crc kubenswrapper[4698]: E1006 12:12:29.332054 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.287855 4698 scope.go:117] "RemoveContainer" containerID="8d4b2b31260bb8b1019ed2aea6f60b0fecc6b87181a639b24ee1baa6775a6df6" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.337686 4698 scope.go:117] "RemoveContainer" containerID="6576b43a4b5269b3be0e29452340a2f2971604596dec66aa7055d831183b3ffd" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.417495 4698 scope.go:117] "RemoveContainer" containerID="907b6f96df04aabdd06329c8ee0d53069e95a66e02272a176afacab0a4fd41d9" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.441690 4698 scope.go:117] "RemoveContainer" containerID="c309372a9dc3bd50d5a28e6f00a67596b79e790d3224eed8219074c412f8a55e" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.498617 4698 scope.go:117] "RemoveContainer" containerID="c0e39504d16a559d57fa54eded68236b728159b49e35d6cd36577ea8d4d7f134" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.541411 4698 scope.go:117] "RemoveContainer" containerID="f592bcf9f164e82cc76d5bfd5d50cd07c671bd9bd9d315e2c0015d7d58f9d385" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.605126 4698 scope.go:117] "RemoveContainer" containerID="3297048c281d173a3252495013432d934a4073922e7765fb39b83229d979c48c" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.654333 4698 scope.go:117] "RemoveContainer" containerID="8854bb3a49990269550c6604d1b7d73b8705440bec9e2cd770a8ea77c3f96b70" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.687260 4698 scope.go:117] "RemoveContainer" containerID="8cbe03762441a8cb9763254a3c018fd44a860251a8e5060154c1775e5fe995aa" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.728319 4698 scope.go:117] "RemoveContainer" 
containerID="4fd9c9bc2b5bd4a512899c1fc9b663e0e9994e2ba4714ff82cdcde849387a3de" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.757462 4698 scope.go:117] "RemoveContainer" containerID="6e85a826cde780433967a0735e27d3decc5dae7fa19da9347ce667974d9c3fcd" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.786604 4698 scope.go:117] "RemoveContainer" containerID="bc3a819b1ed1775cc93380a97ca0a14e7fdf5dc353a2c047e337fc9b5d122981" Oct 06 12:12:40 crc kubenswrapper[4698]: I1006 12:12:40.807886 4698 scope.go:117] "RemoveContainer" containerID="7f8976f7d80858b1c3ac2ec52cc4d521b9b053a6deccba6a870bc8f415e5d943" Oct 06 12:12:42 crc kubenswrapper[4698]: I1006 12:12:42.058385 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-645e-account-create-kb64v"] Oct 06 12:12:42 crc kubenswrapper[4698]: I1006 12:12:42.068684 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-645e-account-create-kb64v"] Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.048132 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3084-account-create-wn4b7"] Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.064828 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-55ab-account-create-flpff"] Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.078140 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-55ab-account-create-flpff"] Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.089438 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3084-account-create-wn4b7"] Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.339945 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:12:43 crc kubenswrapper[4698]: E1006 12:12:43.340977 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.342688 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd003f6-4f1f-417c-b4cf-412d9c06cb3c" path="/var/lib/kubelet/pods/6cd003f6-4f1f-417c-b4cf-412d9c06cb3c/volumes" Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.343303 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d39ea82-8ea0-4df9-97ba-10ae91856a58" path="/var/lib/kubelet/pods/6d39ea82-8ea0-4df9-97ba-10ae91856a58/volumes" Oct 06 12:12:43 crc kubenswrapper[4698]: I1006 12:12:43.344033 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0c1e41-ba16-4e0b-968d-96f9cf129d89" path="/var/lib/kubelet/pods/ad0c1e41-ba16-4e0b-968d-96f9cf129d89/volumes" Oct 06 12:12:53 crc kubenswrapper[4698]: I1006 12:12:53.049682 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-4xb2s"] Oct 06 12:12:53 crc kubenswrapper[4698]: I1006 12:12:53.067549 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-4xb2s"] Oct 06 12:12:53 crc kubenswrapper[4698]: I1006 12:12:53.351897 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c922a4e8-475c-438a-88d9-8d33f597fda6" path="/var/lib/kubelet/pods/c922a4e8-475c-438a-88d9-8d33f597fda6/volumes" Oct 06 12:12:55 crc kubenswrapper[4698]: I1006 12:12:55.332613 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:12:55 crc kubenswrapper[4698]: E1006 12:12:55.332978 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:12:58 crc kubenswrapper[4698]: I1006 12:12:58.054490 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cmcgd"] Oct 06 12:12:58 crc kubenswrapper[4698]: I1006 12:12:58.070937 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cmcgd"] Oct 06 12:12:59 crc kubenswrapper[4698]: I1006 12:12:59.354664 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a181ed38-72eb-491d-b195-c52e4167bac6" path="/var/lib/kubelet/pods/a181ed38-72eb-491d-b195-c52e4167bac6/volumes" Oct 06 12:13:00 crc kubenswrapper[4698]: I1006 12:13:00.036194 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r2jp7"] Oct 06 12:13:00 crc kubenswrapper[4698]: I1006 12:13:00.043479 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r2jp7"] Oct 06 12:13:01 crc kubenswrapper[4698]: I1006 12:13:01.376658 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c6365d-fa8b-4f4e-9683-e021e05882ff" path="/var/lib/kubelet/pods/95c6365d-fa8b-4f4e-9683-e021e05882ff/volumes" Oct 06 12:13:06 crc kubenswrapper[4698]: I1006 12:13:06.330172 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:13:06 crc kubenswrapper[4698]: E1006 12:13:06.331221 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:13:21 crc kubenswrapper[4698]: I1006 12:13:21.330845 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:13:21 crc kubenswrapper[4698]: E1006 12:13:21.332059 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:13:23 crc kubenswrapper[4698]: I1006 12:13:23.943719 4698 generic.go:334] "Generic (PLEG): container finished" podID="cb95c9b2-ec91-415c-851c-1d10cd61f0f4" containerID="d819a2e7d254a141cc5e09db5d03e2739c151a709592fa4dc2a8e8e12aeffe75" exitCode=0
Oct 06 12:13:23 crc kubenswrapper[4698]: I1006 12:13:23.943856 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" event={"ID":"cb95c9b2-ec91-415c-851c-1d10cd61f0f4","Type":"ContainerDied","Data":"d819a2e7d254a141cc5e09db5d03e2739c151a709592fa4dc2a8e8e12aeffe75"}
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.597915 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx"
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.718756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory\") pod \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") "
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.718956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key\") pod \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") "
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.719004 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5s8h\" (UniqueName: \"kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h\") pod \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\" (UID: \"cb95c9b2-ec91-415c-851c-1d10cd61f0f4\") "
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.729552 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h" (OuterVolumeSpecName: "kube-api-access-w5s8h") pod "cb95c9b2-ec91-415c-851c-1d10cd61f0f4" (UID: "cb95c9b2-ec91-415c-851c-1d10cd61f0f4"). InnerVolumeSpecName "kube-api-access-w5s8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.765974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cb95c9b2-ec91-415c-851c-1d10cd61f0f4" (UID: "cb95c9b2-ec91-415c-851c-1d10cd61f0f4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.790467 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory" (OuterVolumeSpecName: "inventory") pod "cb95c9b2-ec91-415c-851c-1d10cd61f0f4" (UID: "cb95c9b2-ec91-415c-851c-1d10cd61f0f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.823941 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.824002 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5s8h\" (UniqueName: \"kubernetes.io/projected/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-kube-api-access-w5s8h\") on node \"crc\" DevicePath \"\""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.824238 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb95c9b2-ec91-415c-851c-1d10cd61f0f4-inventory\") on node \"crc\" DevicePath \"\""
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.983544 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx" event={"ID":"cb95c9b2-ec91-415c-851c-1d10cd61f0f4","Type":"ContainerDied","Data":"bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea"}
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.983591 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc1c3b83fe27d3352f91b6a2c0e61d11db326265a160629bd3b6efc02445deea"
Oct 06 12:13:25 crc kubenswrapper[4698]: I1006 12:13:25.983719 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.094895 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"]
Oct 06 12:13:26 crc kubenswrapper[4698]: E1006 12:13:26.096046 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb95c9b2-ec91-415c-851c-1d10cd61f0f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.096072 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb95c9b2-ec91-415c-851c-1d10cd61f0f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.096287 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb95c9b2-ec91-415c-851c-1d10cd61f0f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.097133 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.102736 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.103004 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.103875 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.104186 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.110837 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"]
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.135833 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.135932 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4754\" (UniqueName: \"kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.136293 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.238854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.239135 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.239180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4754\" (UniqueName: \"kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.245839 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.250484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.277322 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4754\" (UniqueName: \"kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ltjln\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:26 crc kubenswrapper[4698]: I1006 12:13:26.428762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:13:27 crc kubenswrapper[4698]: I1006 12:13:27.102245 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"]
Oct 06 12:13:28 crc kubenswrapper[4698]: I1006 12:13:28.013854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln" event={"ID":"f084d261-7f67-4be1-83b2-7e1c379e0ffe","Type":"ContainerStarted","Data":"f10c7e12514ffa62b28067dd0fbc5d21b2be2cfca216259b49a09d5916b60861"}
Oct 06 12:13:28 crc kubenswrapper[4698]: I1006 12:13:28.014377 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln" event={"ID":"f084d261-7f67-4be1-83b2-7e1c379e0ffe","Type":"ContainerStarted","Data":"bcf0e7b6cf101b1217bc45db87a50b9d82bb82c0242ad165f9ca1923c837fcdf"}
Oct 06 12:13:28 crc kubenswrapper[4698]: I1006 12:13:28.045165 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln" podStartSLOduration=1.5387743170000001 podStartE2EDuration="2.045132995s" podCreationTimestamp="2025-10-06 12:13:26 +0000 UTC" firstStartedPulling="2025-10-06 12:13:27.111806899 +0000 UTC m=+1694.524499082" lastFinishedPulling="2025-10-06 12:13:27.618165547 +0000 UTC m=+1695.030857760" observedRunningTime="2025-10-06 12:13:28.033059969 +0000 UTC m=+1695.445752142" watchObservedRunningTime="2025-10-06 12:13:28.045132995 +0000 UTC m=+1695.457825168"
Oct 06 12:13:35 crc kubenswrapper[4698]: I1006 12:13:35.064290 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5xxbt"]
Oct 06 12:13:35 crc kubenswrapper[4698]: I1006 12:13:35.081635 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5xxbt"]
Oct 06 12:13:35 crc kubenswrapper[4698]: I1006 12:13:35.361501 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c03557f-4b1f-4104-87a7-4a5880180c86" path="/var/lib/kubelet/pods/2c03557f-4b1f-4104-87a7-4a5880180c86/volumes"
Oct 06 12:13:36 crc kubenswrapper[4698]: I1006 12:13:36.329803 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:13:36 crc kubenswrapper[4698]: E1006 12:13:36.330203 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:13:40 crc kubenswrapper[4698]: I1006 12:13:40.048395 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7pkpz"]
Oct 06 12:13:40 crc kubenswrapper[4698]: I1006 12:13:40.064570 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7pkpz"]
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.133519 4698 scope.go:117] "RemoveContainer" containerID="3120db0d68e02bb3d6653182209124dd8a0e844037126303e7d0e9429f65bc62"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.192310 4698 scope.go:117] "RemoveContainer" containerID="69ea6df088b137f18802533343d7250a925e3061cf765b6b4ed9f830cbb31b86"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.286059 4698 scope.go:117] "RemoveContainer" containerID="0cab26195ffb38d052537d2975fa777d5eff7334af27328416bb13ceff20f85c"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.333732 4698 scope.go:117] "RemoveContainer" containerID="07b05c7b787ed15a0f44335b8a40c29369540e885d5f29b7d908f36778e6a8d2"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.366449 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828a84bc-95cc-448b-a84c-4ca894dd754b" path="/var/lib/kubelet/pods/828a84bc-95cc-448b-a84c-4ca894dd754b/volumes"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.426145 4698 scope.go:117] "RemoveContainer" containerID="dd3aa85021520da22b0431a1a148ea4d5be003cde0aa3869286d92941fa57a85"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.458445 4698 scope.go:117] "RemoveContainer" containerID="c9f831d0a6c1b81f49262d9013b3cde99c1d741c216502412296f3bee4937dee"
Oct 06 12:13:41 crc kubenswrapper[4698]: I1006 12:13:41.493500 4698 scope.go:117] "RemoveContainer" containerID="9423c86be3b50ee413bf6ba5829f1997f4559bb26ebc2f0b432405c4f466ab7e"
Oct 06 12:13:44 crc kubenswrapper[4698]: I1006 12:13:44.044313 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t47dq"]
Oct 06 12:13:44 crc kubenswrapper[4698]: I1006 12:13:44.064063 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t47dq"]
Oct 06 12:13:45 crc kubenswrapper[4698]: I1006 12:13:45.368421 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7c054b-7ade-402e-b389-d07ced69c957" path="/var/lib/kubelet/pods/1d7c054b-7ade-402e-b389-d07ced69c957/volumes"
Oct 06 12:13:50 crc kubenswrapper[4698]: I1006 12:13:50.330010 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:13:50 crc kubenswrapper[4698]: E1006 12:13:50.331480 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:13:58 crc kubenswrapper[4698]: I1006 12:13:58.052114 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nc7nk"]
Oct 06 12:13:58 crc kubenswrapper[4698]: I1006 12:13:58.077123 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nc7nk"]
Oct 06 12:13:59 crc kubenswrapper[4698]: I1006 12:13:59.343848 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1bd773-04e5-4524-a48e-b7a65c983a89" path="/var/lib/kubelet/pods/df1bd773-04e5-4524-a48e-b7a65c983a89/volumes"
Oct 06 12:14:01 crc kubenswrapper[4698]: I1006 12:14:01.070946 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-l9vdh"]
Oct 06 12:14:01 crc kubenswrapper[4698]: I1006 12:14:01.084770 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-l9vdh"]
Oct 06 12:14:01 crc kubenswrapper[4698]: I1006 12:14:01.346785 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d620584e-f9cd-432a-9f55-9aa1f1056766" path="/var/lib/kubelet/pods/d620584e-f9cd-432a-9f55-9aa1f1056766/volumes"
Oct 06 12:14:05 crc kubenswrapper[4698]: I1006 12:14:05.331729 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:14:05 crc kubenswrapper[4698]: E1006 12:14:05.332961 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:14:17 crc kubenswrapper[4698]: I1006 12:14:17.329646 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:14:17 crc kubenswrapper[4698]: E1006 12:14:17.331287 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:14:31 crc kubenswrapper[4698]: I1006 12:14:31.329688 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:14:31 crc kubenswrapper[4698]: E1006 12:14:31.331430 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.064976 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rqmcv"]
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.086166 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zn9rc"]
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.107564 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dmtrj"]
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.120420 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rqmcv"]
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.130730 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zn9rc"]
Oct 06 12:14:38 crc kubenswrapper[4698]: I1006 12:14:38.141250 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dmtrj"]
Oct 06 12:14:39 crc kubenswrapper[4698]: I1006 12:14:39.353127 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00458aeb-06de-40c3-aa85-2c88c9cb4229" path="/var/lib/kubelet/pods/00458aeb-06de-40c3-aa85-2c88c9cb4229/volumes"
Oct 06 12:14:39 crc kubenswrapper[4698]: I1006 12:14:39.354941 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900" path="/var/lib/kubelet/pods/9d7fcc5b-e120-42f4-b6ab-f4cfb0dac900/volumes"
Oct 06 12:14:39 crc kubenswrapper[4698]: I1006 12:14:39.355680 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7456d1f-e120-4aa6-bfcc-720c02e5a645" path="/var/lib/kubelet/pods/d7456d1f-e120-4aa6-bfcc-720c02e5a645/volumes"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.695227 4698 scope.go:117] "RemoveContainer" containerID="58e4c201ff0557e866925c0ebc2dd3a242f3286eced63b5617823b5bd0cceaf6"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.770260 4698 scope.go:117] "RemoveContainer" containerID="7a2140df0254cf85694203519aa0a667a574788f23604673caefc8f99e920186"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.813934 4698 scope.go:117] "RemoveContainer" containerID="fc8a876ec55afbcc25b8f1748c61fe85dc24811e1d5b8936e22f28f491fb4110"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.851577 4698 scope.go:117] "RemoveContainer" containerID="ab3c038cfeb76cf8ab1c4eaf406ac4c2175f38a1e89181c54bfd90c4801d7302"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.926207 4698 scope.go:117] "RemoveContainer" containerID="e15ede615d1039c28da6e28604566807ffb0d03e5e8a810496e36f004b855fa7"
Oct 06 12:14:41 crc kubenswrapper[4698]: I1006 12:14:41.981051 4698 scope.go:117] "RemoveContainer" containerID="0067d8c56101b5252796f649d4ef33e2b005c0352bb330a242919c239efed96e"
Oct 06 12:14:42 crc kubenswrapper[4698]: I1006 12:14:42.014147 4698 scope.go:117] "RemoveContainer" containerID="ec84410d5c45d2cb4e65d091fa3dea56bf42f509d9ca919f1655ed5b16a3c069"
Oct 06 12:14:44 crc kubenswrapper[4698]: I1006 12:14:44.047693 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1742-account-create-d49wh"]
Oct 06 12:14:44 crc kubenswrapper[4698]: I1006 12:14:44.060458 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1742-account-create-d49wh"]
Oct 06 12:14:45 crc kubenswrapper[4698]: I1006 12:14:45.330506 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:14:45 crc kubenswrapper[4698]: E1006 12:14:45.331948 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:14:45 crc kubenswrapper[4698]: I1006 12:14:45.353163 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe1856c-014a-49a4-b3df-586640603de9" path="/var/lib/kubelet/pods/2fe1856c-014a-49a4-b3df-586640603de9/volumes"
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.050377 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5041-account-create-f2wjp"]
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.073744 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ff0e-account-create-mvz4n"]
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.088732 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5041-account-create-f2wjp"]
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.101936 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ff0e-account-create-mvz4n"]
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.236805 4698 generic.go:334] "Generic (PLEG): container finished" podID="f084d261-7f67-4be1-83b2-7e1c379e0ffe" containerID="f10c7e12514ffa62b28067dd0fbc5d21b2be2cfca216259b49a09d5916b60861" exitCode=0
Oct 06 12:14:54 crc kubenswrapper[4698]: I1006 12:14:54.236892 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln" event={"ID":"f084d261-7f67-4be1-83b2-7e1c379e0ffe","Type":"ContainerDied","Data":"f10c7e12514ffa62b28067dd0fbc5d21b2be2cfca216259b49a09d5916b60861"}
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.348268 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f9e125-7190-4895-ac10-94ad7e66fea2" path="/var/lib/kubelet/pods/87f9e125-7190-4895-ac10-94ad7e66fea2/volumes"
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.349475 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a6d350-1567-4fda-bb0b-f7091ccf8bbc" path="/var/lib/kubelet/pods/f9a6d350-1567-4fda-bb0b-f7091ccf8bbc/volumes"
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.792182 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.871271 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory\") pod \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") "
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.871466 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key\") pod \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") "
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.871607 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4754\" (UniqueName: \"kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754\") pod \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\" (UID: \"f084d261-7f67-4be1-83b2-7e1c379e0ffe\") "
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.878979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754" (OuterVolumeSpecName: "kube-api-access-z4754") pod "f084d261-7f67-4be1-83b2-7e1c379e0ffe" (UID: "f084d261-7f67-4be1-83b2-7e1c379e0ffe"). InnerVolumeSpecName "kube-api-access-z4754". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.916366 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory" (OuterVolumeSpecName: "inventory") pod "f084d261-7f67-4be1-83b2-7e1c379e0ffe" (UID: "f084d261-7f67-4be1-83b2-7e1c379e0ffe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.932273 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f084d261-7f67-4be1-83b2-7e1c379e0ffe" (UID: "f084d261-7f67-4be1-83b2-7e1c379e0ffe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.976508 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.976559 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4754\" (UniqueName: \"kubernetes.io/projected/f084d261-7f67-4be1-83b2-7e1c379e0ffe-kube-api-access-z4754\") on node \"crc\" DevicePath \"\""
Oct 06 12:14:55 crc kubenswrapper[4698]: I1006 12:14:55.976572 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f084d261-7f67-4be1-83b2-7e1c379e0ffe-inventory\") on node \"crc\" DevicePath \"\""
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.304702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln" event={"ID":"f084d261-7f67-4be1-83b2-7e1c379e0ffe","Type":"ContainerDied","Data":"bcf0e7b6cf101b1217bc45db87a50b9d82bb82c0242ad165f9ca1923c837fcdf"}
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.305246 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf0e7b6cf101b1217bc45db87a50b9d82bb82c0242ad165f9ca1923c837fcdf"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.305352 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ltjln"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.329052 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50"
Oct 06 12:14:56 crc kubenswrapper[4698]: E1006 12:14:56.329477 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.379879 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"]
Oct 06 12:14:56 crc kubenswrapper[4698]: E1006 12:14:56.380452 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f084d261-7f67-4be1-83b2-7e1c379e0ffe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.380467 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f084d261-7f67-4be1-83b2-7e1c379e0ffe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.380696 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f084d261-7f67-4be1-83b2-7e1c379e0ffe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.381530 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.386804 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.386919 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5mn\" (UniqueName: \"kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.386994 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.389465 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.390124 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.390987 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.391375 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.394359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"]
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.489568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.489657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5mn\" (UniqueName: \"kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.489716 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.496984 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.497633 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.534314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5mn\" (UniqueName: \"kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q67vc\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:56 crc kubenswrapper[4698]: I1006 12:14:56.700914 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"
Oct 06 12:14:57 crc kubenswrapper[4698]: I1006 12:14:57.407915 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc"]
Oct 06 12:14:58 crc kubenswrapper[4698]: I1006 12:14:58.331664 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" event={"ID":"ecc55e3d-ca7e-41de-9f19-fb1b2857d398","Type":"ContainerStarted","Data":"c82664fc3c55b003c0de760f4d72491bd07cc431b46e77a033eed62490c3c6d2"}
Oct 06 12:14:58 crc kubenswrapper[4698]: I1006 12:14:58.332646 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" event={"ID":"ecc55e3d-ca7e-41de-9f19-fb1b2857d398","Type":"ContainerStarted","Data":"312c5b5c268e8b409e9acb65806a9a93579867bb85a096f58d12e7e83d2c3671"}
Oct 06 12:14:58 crc kubenswrapper[4698]: I1006 12:14:58.366152 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" podStartSLOduration=1.9449190280000002 podStartE2EDuration="2.366111284s" podCreationTimestamp="2025-10-06 12:14:56 +0000 UTC" firstStartedPulling="2025-10-06 12:14:57.420900728 +0000 UTC m=+1784.833592941" lastFinishedPulling="2025-10-06 12:14:57.842093024 +0000 UTC m=+1785.254785197" observedRunningTime="2025-10-06 12:14:58.363413388 +0000 UTC m=+1785.776105611" watchObservedRunningTime="2025-10-06 12:14:58.366111284 +0000 UTC m=+1785.778803497"
Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.184964 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f"]
Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.188608 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.197237 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f"] Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.198002 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.199810 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.281331 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.281507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zlp\" (UniqueName: \"kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.281652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.385879 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zlp\" (UniqueName: \"kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.386081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.386386 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.387446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.398427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.416437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zlp\" (UniqueName: \"kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp\") pod \"collect-profiles-29329215-5xm5f\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:00 crc kubenswrapper[4698]: I1006 12:15:00.511882 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:01 crc kubenswrapper[4698]: I1006 12:15:01.067392 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f"] Oct 06 12:15:01 crc kubenswrapper[4698]: I1006 12:15:01.377121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" event={"ID":"316e3a91-b7c6-468d-aff2-fef1ed882113","Type":"ContainerStarted","Data":"54047dedbd345b5fad8b9caa1736340b250f5fdb00dae6412586052585c79c1a"} Oct 06 12:15:01 crc kubenswrapper[4698]: I1006 12:15:01.377633 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" event={"ID":"316e3a91-b7c6-468d-aff2-fef1ed882113","Type":"ContainerStarted","Data":"c50fda2b029f60c6ea07df5a6027a16baec3b636b709c4f9eb61d89f9e2d37d1"} Oct 06 12:15:01 crc kubenswrapper[4698]: I1006 12:15:01.414854 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" 
podStartSLOduration=1.4148344609999999 podStartE2EDuration="1.414834461s" podCreationTimestamp="2025-10-06 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:15:01.406764384 +0000 UTC m=+1788.819456567" watchObservedRunningTime="2025-10-06 12:15:01.414834461 +0000 UTC m=+1788.827526634" Oct 06 12:15:02 crc kubenswrapper[4698]: I1006 12:15:02.391575 4698 generic.go:334] "Generic (PLEG): container finished" podID="316e3a91-b7c6-468d-aff2-fef1ed882113" containerID="54047dedbd345b5fad8b9caa1736340b250f5fdb00dae6412586052585c79c1a" exitCode=0 Oct 06 12:15:02 crc kubenswrapper[4698]: I1006 12:15:02.391693 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" event={"ID":"316e3a91-b7c6-468d-aff2-fef1ed882113","Type":"ContainerDied","Data":"54047dedbd345b5fad8b9caa1736340b250f5fdb00dae6412586052585c79c1a"} Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.804929 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.976528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume\") pod \"316e3a91-b7c6-468d-aff2-fef1ed882113\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.976784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7zlp\" (UniqueName: \"kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp\") pod \"316e3a91-b7c6-468d-aff2-fef1ed882113\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.977565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume\") pod \"316e3a91-b7c6-468d-aff2-fef1ed882113\" (UID: \"316e3a91-b7c6-468d-aff2-fef1ed882113\") " Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.978706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume" (OuterVolumeSpecName: "config-volume") pod "316e3a91-b7c6-468d-aff2-fef1ed882113" (UID: "316e3a91-b7c6-468d-aff2-fef1ed882113"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.987689 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp" (OuterVolumeSpecName: "kube-api-access-g7zlp") pod "316e3a91-b7c6-468d-aff2-fef1ed882113" (UID: "316e3a91-b7c6-468d-aff2-fef1ed882113"). 
InnerVolumeSpecName "kube-api-access-g7zlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:03 crc kubenswrapper[4698]: I1006 12:15:03.987891 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "316e3a91-b7c6-468d-aff2-fef1ed882113" (UID: "316e3a91-b7c6-468d-aff2-fef1ed882113"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.081001 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3a91-b7c6-468d-aff2-fef1ed882113-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.081141 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/316e3a91-b7c6-468d-aff2-fef1ed882113-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.081153 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7zlp\" (UniqueName: \"kubernetes.io/projected/316e3a91-b7c6-468d-aff2-fef1ed882113-kube-api-access-g7zlp\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.424362 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" event={"ID":"316e3a91-b7c6-468d-aff2-fef1ed882113","Type":"ContainerDied","Data":"c50fda2b029f60c6ea07df5a6027a16baec3b636b709c4f9eb61d89f9e2d37d1"} Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.425085 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50fda2b029f60c6ea07df5a6027a16baec3b636b709c4f9eb61d89f9e2d37d1" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.424421 4698 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f" Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.428716 4698 generic.go:334] "Generic (PLEG): container finished" podID="ecc55e3d-ca7e-41de-9f19-fb1b2857d398" containerID="c82664fc3c55b003c0de760f4d72491bd07cc431b46e77a033eed62490c3c6d2" exitCode=0 Oct 06 12:15:04 crc kubenswrapper[4698]: I1006 12:15:04.428785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" event={"ID":"ecc55e3d-ca7e-41de-9f19-fb1b2857d398","Type":"ContainerDied","Data":"c82664fc3c55b003c0de760f4d72491bd07cc431b46e77a033eed62490c3c6d2"} Oct 06 12:15:05 crc kubenswrapper[4698]: I1006 12:15:05.924607 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.038549 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory\") pod \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.038620 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key\") pod \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.038875 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5mn\" (UniqueName: \"kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn\") pod \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\" (UID: \"ecc55e3d-ca7e-41de-9f19-fb1b2857d398\") " Oct 
06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.046403 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn" (OuterVolumeSpecName: "kube-api-access-bz5mn") pod "ecc55e3d-ca7e-41de-9f19-fb1b2857d398" (UID: "ecc55e3d-ca7e-41de-9f19-fb1b2857d398"). InnerVolumeSpecName "kube-api-access-bz5mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.072531 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ecc55e3d-ca7e-41de-9f19-fb1b2857d398" (UID: "ecc55e3d-ca7e-41de-9f19-fb1b2857d398"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.094734 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory" (OuterVolumeSpecName: "inventory") pod "ecc55e3d-ca7e-41de-9f19-fb1b2857d398" (UID: "ecc55e3d-ca7e-41de-9f19-fb1b2857d398"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.142596 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.142660 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.142672 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5mn\" (UniqueName: \"kubernetes.io/projected/ecc55e3d-ca7e-41de-9f19-fb1b2857d398-kube-api-access-bz5mn\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.459209 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" event={"ID":"ecc55e3d-ca7e-41de-9f19-fb1b2857d398","Type":"ContainerDied","Data":"312c5b5c268e8b409e9acb65806a9a93579867bb85a096f58d12e7e83d2c3671"} Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.459785 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312c5b5c268e8b409e9acb65806a9a93579867bb85a096f58d12e7e83d2c3671" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.459274 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q67vc" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.579932 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t"] Oct 06 12:15:06 crc kubenswrapper[4698]: E1006 12:15:06.580674 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc55e3d-ca7e-41de-9f19-fb1b2857d398" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.580709 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc55e3d-ca7e-41de-9f19-fb1b2857d398" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:06 crc kubenswrapper[4698]: E1006 12:15:06.580749 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316e3a91-b7c6-468d-aff2-fef1ed882113" containerName="collect-profiles" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.580763 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="316e3a91-b7c6-468d-aff2-fef1ed882113" containerName="collect-profiles" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.581138 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc55e3d-ca7e-41de-9f19-fb1b2857d398" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.581182 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="316e3a91-b7c6-468d-aff2-fef1ed882113" containerName="collect-profiles" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.582474 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.586664 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.586885 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.587068 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.587418 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.599533 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t"] Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.764313 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.764385 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpfzb\" (UniqueName: \"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.764976 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.867116 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.867365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.867432 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpfzb\" (UniqueName: \"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.876177 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: 
\"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.877845 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.890456 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpfzb\" (UniqueName: \"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8lz9t\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:06 crc kubenswrapper[4698]: I1006 12:15:06.910433 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:07 crc kubenswrapper[4698]: I1006 12:15:07.368331 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t"] Oct 06 12:15:07 crc kubenswrapper[4698]: I1006 12:15:07.377228 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:15:07 crc kubenswrapper[4698]: I1006 12:15:07.471148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" event={"ID":"fdcfa9c6-8380-471e-a9bb-1368772713a5","Type":"ContainerStarted","Data":"713f7ca2bf50aa06f3a0dc86048fc0d5d95f977180c15aeca4c24b3b0c8911a3"} Oct 06 12:15:08 crc kubenswrapper[4698]: I1006 12:15:08.491330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" event={"ID":"fdcfa9c6-8380-471e-a9bb-1368772713a5","Type":"ContainerStarted","Data":"df93784c65f9c3cefd4fbd730cb84c3be05b2b9e13cf3d89b3f1b6bb200f03a2"} Oct 06 12:15:08 crc kubenswrapper[4698]: I1006 12:15:08.553240 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" podStartSLOduration=1.8835566620000002 podStartE2EDuration="2.553204351s" podCreationTimestamp="2025-10-06 12:15:06 +0000 UTC" firstStartedPulling="2025-10-06 12:15:07.376923557 +0000 UTC m=+1794.789615740" lastFinishedPulling="2025-10-06 12:15:08.046571216 +0000 UTC m=+1795.459263429" observedRunningTime="2025-10-06 12:15:08.524168901 +0000 UTC m=+1795.936861114" watchObservedRunningTime="2025-10-06 12:15:08.553204351 +0000 UTC m=+1795.965896544" Oct 06 12:15:11 crc kubenswrapper[4698]: I1006 12:15:11.329669 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:15:11 crc 
kubenswrapper[4698]: E1006 12:15:11.330580 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:15:21 crc kubenswrapper[4698]: I1006 12:15:21.069455 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w87sz"] Oct 06 12:15:21 crc kubenswrapper[4698]: I1006 12:15:21.081099 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w87sz"] Oct 06 12:15:21 crc kubenswrapper[4698]: I1006 12:15:21.352900 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e437d39-eb38-4140-ad48-50740fb31ee4" path="/var/lib/kubelet/pods/8e437d39-eb38-4140-ad48-50740fb31ee4/volumes" Oct 06 12:15:25 crc kubenswrapper[4698]: I1006 12:15:25.329680 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:15:25 crc kubenswrapper[4698]: I1006 12:15:25.768144 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530"} Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.157227 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.162613 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.192193 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.201465 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2drf\" (UniqueName: \"kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.201590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.201735 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.304352 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.304882 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.305082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.305433 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.305698 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2drf\" (UniqueName: \"kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.331652 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2drf\" (UniqueName: \"kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf\") pod \"redhat-marketplace-xb8g5\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:36 crc kubenswrapper[4698]: I1006 12:15:36.500548 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:37 crc kubenswrapper[4698]: I1006 12:15:37.084588 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:37 crc kubenswrapper[4698]: W1006 12:15:37.097234 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4724d7_96d4_461f_a193_5af3518db60b.slice/crio-eeffa81c203baab6a2ca10e73ffa32291b84863790b564d798d4c4333376c61b WatchSource:0}: Error finding container eeffa81c203baab6a2ca10e73ffa32291b84863790b564d798d4c4333376c61b: Status 404 returned error can't find the container with id eeffa81c203baab6a2ca10e73ffa32291b84863790b564d798d4c4333376c61b Oct 06 12:15:37 crc kubenswrapper[4698]: I1006 12:15:37.923759 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca4724d7-96d4-461f-a193-5af3518db60b" containerID="a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c" exitCode=0 Oct 06 12:15:37 crc kubenswrapper[4698]: I1006 12:15:37.923847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerDied","Data":"a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c"} Oct 06 12:15:37 crc kubenswrapper[4698]: I1006 12:15:37.924262 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerStarted","Data":"eeffa81c203baab6a2ca10e73ffa32291b84863790b564d798d4c4333376c61b"} Oct 06 12:15:39 crc kubenswrapper[4698]: I1006 12:15:39.953183 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca4724d7-96d4-461f-a193-5af3518db60b" containerID="b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd" exitCode=0 Oct 06 12:15:39 crc kubenswrapper[4698]: I1006 
12:15:39.953340 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerDied","Data":"b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd"} Oct 06 12:15:40 crc kubenswrapper[4698]: I1006 12:15:40.970320 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerStarted","Data":"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0"} Oct 06 12:15:41 crc kubenswrapper[4698]: I1006 12:15:41.007563 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xb8g5" podStartSLOduration=2.558883759 podStartE2EDuration="5.007532128s" podCreationTimestamp="2025-10-06 12:15:36 +0000 UTC" firstStartedPulling="2025-10-06 12:15:37.927728861 +0000 UTC m=+1825.340421034" lastFinishedPulling="2025-10-06 12:15:40.37637723 +0000 UTC m=+1827.789069403" observedRunningTime="2025-10-06 12:15:40.988942734 +0000 UTC m=+1828.401634917" watchObservedRunningTime="2025-10-06 12:15:41.007532128 +0000 UTC m=+1828.420224301" Oct 06 12:15:42 crc kubenswrapper[4698]: I1006 12:15:42.236272 4698 scope.go:117] "RemoveContainer" containerID="191d539d441bab1ee1e0404ed5e9802a09b8d57c2503f1b82d140a04a499c6e2" Oct 06 12:15:42 crc kubenswrapper[4698]: I1006 12:15:42.270438 4698 scope.go:117] "RemoveContainer" containerID="55f917659c6ec1d986c64bb9973cb8c27b97a1fd240079253fb1050b1d4f03e9" Oct 06 12:15:42 crc kubenswrapper[4698]: I1006 12:15:42.363839 4698 scope.go:117] "RemoveContainer" containerID="42de3eeaae5bb0142991acdb24c4a7049690d16b2c90435d1632671ff42bb7f1" Oct 06 12:15:42 crc kubenswrapper[4698]: I1006 12:15:42.391594 4698 scope.go:117] "RemoveContainer" containerID="965ab0daa78e3030ce59e3cc374ec6c5b2bc8745327fcb25ed70b656de5052a7" Oct 06 12:15:45 crc kubenswrapper[4698]: I1006 
12:15:45.071248 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9hwq"] Oct 06 12:15:45 crc kubenswrapper[4698]: I1006 12:15:45.086763 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9hwq"] Oct 06 12:15:45 crc kubenswrapper[4698]: I1006 12:15:45.354518 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582e7285-37d7-483e-8196-6fbcfe1cc9ec" path="/var/lib/kubelet/pods/582e7285-37d7-483e-8196-6fbcfe1cc9ec/volumes" Oct 06 12:15:46 crc kubenswrapper[4698]: I1006 12:15:46.501581 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:46 crc kubenswrapper[4698]: I1006 12:15:46.501997 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:46 crc kubenswrapper[4698]: I1006 12:15:46.592087 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:47 crc kubenswrapper[4698]: I1006 12:15:47.046211 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zcd97"] Oct 06 12:15:47 crc kubenswrapper[4698]: I1006 12:15:47.062417 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zcd97"] Oct 06 12:15:47 crc kubenswrapper[4698]: I1006 12:15:47.118847 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:47 crc kubenswrapper[4698]: I1006 12:15:47.348982 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941bdc31-a448-4fce-910b-54eed75a1974" path="/var/lib/kubelet/pods/941bdc31-a448-4fce-910b-54eed75a1974/volumes" Oct 06 12:15:48 crc kubenswrapper[4698]: I1006 12:15:48.942261 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.073928 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xb8g5" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="registry-server" containerID="cri-o://87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0" gracePeriod=2 Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.596965 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.711483 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities\") pod \"ca4724d7-96d4-461f-a193-5af3518db60b\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.712079 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2drf\" (UniqueName: \"kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf\") pod \"ca4724d7-96d4-461f-a193-5af3518db60b\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.712390 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content\") pod \"ca4724d7-96d4-461f-a193-5af3518db60b\" (UID: \"ca4724d7-96d4-461f-a193-5af3518db60b\") " Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.712818 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities" (OuterVolumeSpecName: "utilities") pod "ca4724d7-96d4-461f-a193-5af3518db60b" (UID: 
"ca4724d7-96d4-461f-a193-5af3518db60b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.714333 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.727053 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf" (OuterVolumeSpecName: "kube-api-access-n2drf") pod "ca4724d7-96d4-461f-a193-5af3518db60b" (UID: "ca4724d7-96d4-461f-a193-5af3518db60b"). InnerVolumeSpecName "kube-api-access-n2drf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.729820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca4724d7-96d4-461f-a193-5af3518db60b" (UID: "ca4724d7-96d4-461f-a193-5af3518db60b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.817584 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2drf\" (UniqueName: \"kubernetes.io/projected/ca4724d7-96d4-461f-a193-5af3518db60b-kube-api-access-n2drf\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:49 crc kubenswrapper[4698]: I1006 12:15:49.817906 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4724d7-96d4-461f-a193-5af3518db60b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.092129 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb8g5" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.092135 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca4724d7-96d4-461f-a193-5af3518db60b" containerID="87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0" exitCode=0 Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.092184 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerDied","Data":"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0"} Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.092289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb8g5" event={"ID":"ca4724d7-96d4-461f-a193-5af3518db60b","Type":"ContainerDied","Data":"eeffa81c203baab6a2ca10e73ffa32291b84863790b564d798d4c4333376c61b"} Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.092332 4698 scope.go:117] "RemoveContainer" containerID="87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.138227 4698 scope.go:117] "RemoveContainer" containerID="b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.139676 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.154726 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb8g5"] Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.180823 4698 scope.go:117] "RemoveContainer" containerID="a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.227641 4698 scope.go:117] "RemoveContainer" 
containerID="87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0" Oct 06 12:15:50 crc kubenswrapper[4698]: E1006 12:15:50.228309 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0\": container with ID starting with 87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0 not found: ID does not exist" containerID="87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.228369 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0"} err="failed to get container status \"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0\": rpc error: code = NotFound desc = could not find container \"87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0\": container with ID starting with 87261e8f041c775cb9290b52f6c556919d82842f74a1cc52881ee8f6f59a36d0 not found: ID does not exist" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.228427 4698 scope.go:117] "RemoveContainer" containerID="b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd" Oct 06 12:15:50 crc kubenswrapper[4698]: E1006 12:15:50.229289 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd\": container with ID starting with b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd not found: ID does not exist" containerID="b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.229341 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd"} err="failed to get container status \"b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd\": rpc error: code = NotFound desc = could not find container \"b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd\": container with ID starting with b9bb38f0222b2189011fa7ededfbb26364062d9079bd04ddf424b045b3b49afd not found: ID does not exist" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.229369 4698 scope.go:117] "RemoveContainer" containerID="a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c" Oct 06 12:15:50 crc kubenswrapper[4698]: E1006 12:15:50.229870 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c\": container with ID starting with a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c not found: ID does not exist" containerID="a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c" Oct 06 12:15:50 crc kubenswrapper[4698]: I1006 12:15:50.229929 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c"} err="failed to get container status \"a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c\": rpc error: code = NotFound desc = could not find container \"a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c\": container with ID starting with a6c4aa9da0212cce8f77a3ebd414e6bbb64419f57223d95a8be90eb3f046936c not found: ID does not exist" Oct 06 12:15:51 crc kubenswrapper[4698]: I1006 12:15:51.348779 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" path="/var/lib/kubelet/pods/ca4724d7-96d4-461f-a193-5af3518db60b/volumes" Oct 06 12:15:52 crc kubenswrapper[4698]: I1006 
12:15:52.126129 4698 generic.go:334] "Generic (PLEG): container finished" podID="fdcfa9c6-8380-471e-a9bb-1368772713a5" containerID="df93784c65f9c3cefd4fbd730cb84c3be05b2b9e13cf3d89b3f1b6bb200f03a2" exitCode=0 Oct 06 12:15:52 crc kubenswrapper[4698]: I1006 12:15:52.126228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" event={"ID":"fdcfa9c6-8380-471e-a9bb-1368772713a5","Type":"ContainerDied","Data":"df93784c65f9c3cefd4fbd730cb84c3be05b2b9e13cf3d89b3f1b6bb200f03a2"} Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.650797 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.729643 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpfzb\" (UniqueName: \"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb\") pod \"fdcfa9c6-8380-471e-a9bb-1368772713a5\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.729880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory\") pod \"fdcfa9c6-8380-471e-a9bb-1368772713a5\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.729947 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key\") pod \"fdcfa9c6-8380-471e-a9bb-1368772713a5\" (UID: \"fdcfa9c6-8380-471e-a9bb-1368772713a5\") " Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.740532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb" (OuterVolumeSpecName: "kube-api-access-lpfzb") pod "fdcfa9c6-8380-471e-a9bb-1368772713a5" (UID: "fdcfa9c6-8380-471e-a9bb-1368772713a5"). InnerVolumeSpecName "kube-api-access-lpfzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.785528 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory" (OuterVolumeSpecName: "inventory") pod "fdcfa9c6-8380-471e-a9bb-1368772713a5" (UID: "fdcfa9c6-8380-471e-a9bb-1368772713a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.790658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fdcfa9c6-8380-471e-a9bb-1368772713a5" (UID: "fdcfa9c6-8380-471e-a9bb-1368772713a5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.834306 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpfzb\" (UniqueName: \"kubernetes.io/projected/fdcfa9c6-8380-471e-a9bb-1368772713a5-kube-api-access-lpfzb\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.834530 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:53 crc kubenswrapper[4698]: I1006 12:15:53.834695 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fdcfa9c6-8380-471e-a9bb-1368772713a5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.158498 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" event={"ID":"fdcfa9c6-8380-471e-a9bb-1368772713a5","Type":"ContainerDied","Data":"713f7ca2bf50aa06f3a0dc86048fc0d5d95f977180c15aeca4c24b3b0c8911a3"} Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.158582 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713f7ca2bf50aa06f3a0dc86048fc0d5d95f977180c15aeca4c24b3b0c8911a3" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.158741 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8lz9t" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.316820 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7"] Oct 06 12:15:54 crc kubenswrapper[4698]: E1006 12:15:54.317701 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="extract-utilities" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.317736 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="extract-utilities" Oct 06 12:15:54 crc kubenswrapper[4698]: E1006 12:15:54.317766 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcfa9c6-8380-471e-a9bb-1368772713a5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.317780 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcfa9c6-8380-471e-a9bb-1368772713a5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:54 crc kubenswrapper[4698]: E1006 12:15:54.317814 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="registry-server" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.317827 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="registry-server" Oct 06 12:15:54 crc kubenswrapper[4698]: E1006 12:15:54.317860 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="extract-content" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.317873 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="extract-content" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.318308 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4724d7-96d4-461f-a193-5af3518db60b" containerName="registry-server" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.318365 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcfa9c6-8380-471e-a9bb-1368772713a5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.319746 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.323806 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.324207 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.324479 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.324823 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.342536 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7"] Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.451637 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 
12:15:54.451764 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.453490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.556893 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.557007 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.557291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.564653 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.567229 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.589377 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:54 crc kubenswrapper[4698]: I1006 12:15:54.656323 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:15:55 crc kubenswrapper[4698]: I1006 12:15:55.315936 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7"] Oct 06 12:15:56 crc kubenswrapper[4698]: I1006 12:15:56.186584 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" event={"ID":"be279c30-e0a4-4828-8e13-2375265bb01f","Type":"ContainerStarted","Data":"532b046fc7b6cd51be6a1390d7e8e9d69678dd9a56d8dca1c32c72d22c7a03a9"} Oct 06 12:15:57 crc kubenswrapper[4698]: I1006 12:15:57.203616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" event={"ID":"be279c30-e0a4-4828-8e13-2375265bb01f","Type":"ContainerStarted","Data":"40b4efe3a37f903632cc8a712afa1cccd2e19e592a8c7583cf11b1cf8dfc4151"} Oct 06 12:15:57 crc kubenswrapper[4698]: I1006 12:15:57.232400 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" podStartSLOduration=2.625100154 podStartE2EDuration="3.232375979s" podCreationTimestamp="2025-10-06 12:15:54 +0000 UTC" firstStartedPulling="2025-10-06 12:15:55.328869208 +0000 UTC m=+1842.741561371" lastFinishedPulling="2025-10-06 12:15:55.936144993 +0000 UTC m=+1843.348837196" observedRunningTime="2025-10-06 12:15:57.226076256 +0000 UTC m=+1844.638768429" watchObservedRunningTime="2025-10-06 12:15:57.232375979 +0000 UTC m=+1844.645068172" Oct 06 12:16:33 crc kubenswrapper[4698]: I1006 12:16:33.069903 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qcgff"] Oct 06 12:16:33 crc kubenswrapper[4698]: I1006 12:16:33.084248 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qcgff"] Oct 06 12:16:33 crc kubenswrapper[4698]: I1006 
12:16:33.365273 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec80afd9-0c75-4270-9b9a-c9f0380a3a86" path="/var/lib/kubelet/pods/ec80afd9-0c75-4270-9b9a-c9f0380a3a86/volumes" Oct 06 12:16:42 crc kubenswrapper[4698]: I1006 12:16:42.605834 4698 scope.go:117] "RemoveContainer" containerID="27f205a545deab8cf7b769000f51c44b23c905ae882eac8c726f6c4bc6de7c42" Oct 06 12:16:42 crc kubenswrapper[4698]: I1006 12:16:42.687287 4698 scope.go:117] "RemoveContainer" containerID="41fe2948ac012ee93159502e051c4f083b86682be3350fab5a2a399b5353d058" Oct 06 12:16:42 crc kubenswrapper[4698]: I1006 12:16:42.737849 4698 scope.go:117] "RemoveContainer" containerID="a74e8cfe356a34926ecc11c993e83499c1c1d8a53f409820a7d75e011ea33b41" Oct 06 12:16:56 crc kubenswrapper[4698]: I1006 12:16:56.049373 4698 generic.go:334] "Generic (PLEG): container finished" podID="be279c30-e0a4-4828-8e13-2375265bb01f" containerID="40b4efe3a37f903632cc8a712afa1cccd2e19e592a8c7583cf11b1cf8dfc4151" exitCode=2 Oct 06 12:16:56 crc kubenswrapper[4698]: I1006 12:16:56.049608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" event={"ID":"be279c30-e0a4-4828-8e13-2375265bb01f","Type":"ContainerDied","Data":"40b4efe3a37f903632cc8a712afa1cccd2e19e592a8c7583cf11b1cf8dfc4151"} Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.576530 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.670850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory\") pod \"be279c30-e0a4-4828-8e13-2375265bb01f\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.671448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8\") pod \"be279c30-e0a4-4828-8e13-2375265bb01f\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.671786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key\") pod \"be279c30-e0a4-4828-8e13-2375265bb01f\" (UID: \"be279c30-e0a4-4828-8e13-2375265bb01f\") " Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.680279 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8" (OuterVolumeSpecName: "kube-api-access-fg8z8") pod "be279c30-e0a4-4828-8e13-2375265bb01f" (UID: "be279c30-e0a4-4828-8e13-2375265bb01f"). InnerVolumeSpecName "kube-api-access-fg8z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.754326 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory" (OuterVolumeSpecName: "inventory") pod "be279c30-e0a4-4828-8e13-2375265bb01f" (UID: "be279c30-e0a4-4828-8e13-2375265bb01f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.757556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be279c30-e0a4-4828-8e13-2375265bb01f" (UID: "be279c30-e0a4-4828-8e13-2375265bb01f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.776220 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.776278 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/be279c30-e0a4-4828-8e13-2375265bb01f-kube-api-access-fg8z8\") on node \"crc\" DevicePath \"\"" Oct 06 12:16:57 crc kubenswrapper[4698]: I1006 12:16:57.776301 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be279c30-e0a4-4828-8e13-2375265bb01f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:16:58 crc kubenswrapper[4698]: I1006 12:16:58.103499 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" event={"ID":"be279c30-e0a4-4828-8e13-2375265bb01f","Type":"ContainerDied","Data":"532b046fc7b6cd51be6a1390d7e8e9d69678dd9a56d8dca1c32c72d22c7a03a9"} Oct 06 12:16:58 crc kubenswrapper[4698]: I1006 12:16:58.103928 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532b046fc7b6cd51be6a1390d7e8e9d69678dd9a56d8dca1c32c72d22c7a03a9" Oct 06 12:16:58 crc kubenswrapper[4698]: I1006 12:16:58.103608 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.042880 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg"] Oct 06 12:17:05 crc kubenswrapper[4698]: E1006 12:17:05.046405 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be279c30-e0a4-4828-8e13-2375265bb01f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.046439 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="be279c30-e0a4-4828-8e13-2375265bb01f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.046834 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="be279c30-e0a4-4828-8e13-2375265bb01f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.048052 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.051887 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.052105 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.053800 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.058135 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg"] Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.059298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.239005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.239155 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.240401 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6zfx\" (UniqueName: \"kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.344226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.344757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6zfx\" (UniqueName: \"kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.345104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.353115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: 
\"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.361915 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.380562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6zfx\" (UniqueName: \"kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5jclg\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:05 crc kubenswrapper[4698]: I1006 12:17:05.387714 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:17:06 crc kubenswrapper[4698]: I1006 12:17:06.082792 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg"] Oct 06 12:17:06 crc kubenswrapper[4698]: I1006 12:17:06.219936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" event={"ID":"166970c1-3e73-47ca-b4c7-ea9c980ce7bb","Type":"ContainerStarted","Data":"63015a3a857f7c4abb327fb06b31795366531e65252f25fcdeb6a41cb1a231c7"} Oct 06 12:17:07 crc kubenswrapper[4698]: I1006 12:17:07.244465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" event={"ID":"166970c1-3e73-47ca-b4c7-ea9c980ce7bb","Type":"ContainerStarted","Data":"d95d2bc31323286bf9432e8119f43a2d80db143e10d2f7b10cc21eeda5a31b52"} Oct 06 12:17:07 crc kubenswrapper[4698]: I1006 12:17:07.281695 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" podStartSLOduration=1.74518734 podStartE2EDuration="2.281675376s" podCreationTimestamp="2025-10-06 12:17:05 +0000 UTC" firstStartedPulling="2025-10-06 12:17:06.094494872 +0000 UTC m=+1913.507187055" lastFinishedPulling="2025-10-06 12:17:06.630982908 +0000 UTC m=+1914.043675091" observedRunningTime="2025-10-06 12:17:07.278652431 +0000 UTC m=+1914.691344614" watchObservedRunningTime="2025-10-06 12:17:07.281675376 +0000 UTC m=+1914.694367559" Oct 06 12:17:25 crc kubenswrapper[4698]: I1006 12:17:25.234907 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:17:25 crc 
kubenswrapper[4698]: I1006 12:17:25.235581 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:17:55 crc kubenswrapper[4698]: I1006 12:17:55.235034 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:17:55 crc kubenswrapper[4698]: I1006 12:17:55.235818 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:18:02 crc kubenswrapper[4698]: I1006 12:18:02.033400 4698 generic.go:334] "Generic (PLEG): container finished" podID="166970c1-3e73-47ca-b4c7-ea9c980ce7bb" containerID="d95d2bc31323286bf9432e8119f43a2d80db143e10d2f7b10cc21eeda5a31b52" exitCode=0 Oct 06 12:18:02 crc kubenswrapper[4698]: I1006 12:18:02.034554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" event={"ID":"166970c1-3e73-47ca-b4c7-ea9c980ce7bb","Type":"ContainerDied","Data":"d95d2bc31323286bf9432e8119f43a2d80db143e10d2f7b10cc21eeda5a31b52"} Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.642168 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.752379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key\") pod \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.752929 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6zfx\" (UniqueName: \"kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx\") pod \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.753068 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory\") pod \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\" (UID: \"166970c1-3e73-47ca-b4c7-ea9c980ce7bb\") " Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.761777 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx" (OuterVolumeSpecName: "kube-api-access-w6zfx") pod "166970c1-3e73-47ca-b4c7-ea9c980ce7bb" (UID: "166970c1-3e73-47ca-b4c7-ea9c980ce7bb"). InnerVolumeSpecName "kube-api-access-w6zfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.800531 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory" (OuterVolumeSpecName: "inventory") pod "166970c1-3e73-47ca-b4c7-ea9c980ce7bb" (UID: "166970c1-3e73-47ca-b4c7-ea9c980ce7bb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.801200 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "166970c1-3e73-47ca-b4c7-ea9c980ce7bb" (UID: "166970c1-3e73-47ca-b4c7-ea9c980ce7bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.856628 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.856688 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:03 crc kubenswrapper[4698]: I1006 12:18:03.856712 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6zfx\" (UniqueName: \"kubernetes.io/projected/166970c1-3e73-47ca-b4c7-ea9c980ce7bb-kube-api-access-w6zfx\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.062582 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" event={"ID":"166970c1-3e73-47ca-b4c7-ea9c980ce7bb","Type":"ContainerDied","Data":"63015a3a857f7c4abb327fb06b31795366531e65252f25fcdeb6a41cb1a231c7"} Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.062664 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63015a3a857f7c4abb327fb06b31795366531e65252f25fcdeb6a41cb1a231c7" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.062747 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5jclg" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.210553 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z9v58"] Oct 06 12:18:04 crc kubenswrapper[4698]: E1006 12:18:04.211443 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166970c1-3e73-47ca-b4c7-ea9c980ce7bb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.211482 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="166970c1-3e73-47ca-b4c7-ea9c980ce7bb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.211950 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="166970c1-3e73-47ca-b4c7-ea9c980ce7bb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.215431 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.218298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.219602 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.220004 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.223880 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.226715 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z9v58"] Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.266577 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45wz\" (UniqueName: \"kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.266666 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.266732 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.370185 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.372803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.374824 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45wz\" (UniqueName: \"kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.379604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 
06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.380857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.409927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45wz\" (UniqueName: \"kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz\") pod \"ssh-known-hosts-edpm-deployment-z9v58\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:04 crc kubenswrapper[4698]: I1006 12:18:04.572799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:05 crc kubenswrapper[4698]: I1006 12:18:05.241264 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z9v58"] Oct 06 12:18:06 crc kubenswrapper[4698]: I1006 12:18:06.093311 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" event={"ID":"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9","Type":"ContainerStarted","Data":"54c5c3f781c9c4d3b9bf82f4793c56c9afb464b10471a0834a7d24402c77b1e8"} Oct 06 12:18:06 crc kubenswrapper[4698]: I1006 12:18:06.093990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" event={"ID":"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9","Type":"ContainerStarted","Data":"3a0754d7818cbfa06e490ee841ec4a57f25bea44860c2f45126b19e8e80c4155"} Oct 06 12:18:06 crc kubenswrapper[4698]: I1006 12:18:06.125278 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" podStartSLOduration=1.672886882 podStartE2EDuration="2.125252141s" podCreationTimestamp="2025-10-06 12:18:04 +0000 UTC" firstStartedPulling="2025-10-06 12:18:05.258712047 +0000 UTC m=+1972.671404220" lastFinishedPulling="2025-10-06 12:18:05.711077296 +0000 UTC m=+1973.123769479" observedRunningTime="2025-10-06 12:18:06.116255942 +0000 UTC m=+1973.528948115" watchObservedRunningTime="2025-10-06 12:18:06.125252141 +0000 UTC m=+1973.537944324" Oct 06 12:18:14 crc kubenswrapper[4698]: I1006 12:18:14.214930 4698 generic.go:334] "Generic (PLEG): container finished" podID="2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" containerID="54c5c3f781c9c4d3b9bf82f4793c56c9afb464b10471a0834a7d24402c77b1e8" exitCode=0 Oct 06 12:18:14 crc kubenswrapper[4698]: I1006 12:18:14.215061 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" event={"ID":"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9","Type":"ContainerDied","Data":"54c5c3f781c9c4d3b9bf82f4793c56c9afb464b10471a0834a7d24402c77b1e8"} Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.781289 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.886860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45wz\" (UniqueName: \"kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz\") pod \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.887072 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0\") pod \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.887119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam\") pod \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\" (UID: \"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9\") " Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.897372 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz" (OuterVolumeSpecName: "kube-api-access-s45wz") pod "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" (UID: "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9"). InnerVolumeSpecName "kube-api-access-s45wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.920276 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" (UID: "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.933225 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" (UID: "2e1a78cf-8260-4c6c-88ed-fa72b63e10a9"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.990771 4698 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.990862 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:15 crc kubenswrapper[4698]: I1006 12:18:15.990889 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45wz\" (UniqueName: \"kubernetes.io/projected/2e1a78cf-8260-4c6c-88ed-fa72b63e10a9-kube-api-access-s45wz\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.248610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" 
event={"ID":"2e1a78cf-8260-4c6c-88ed-fa72b63e10a9","Type":"ContainerDied","Data":"3a0754d7818cbfa06e490ee841ec4a57f25bea44860c2f45126b19e8e80c4155"} Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.248688 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a0754d7818cbfa06e490ee841ec4a57f25bea44860c2f45126b19e8e80c4155" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.248681 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z9v58" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.348700 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r"] Oct 06 12:18:16 crc kubenswrapper[4698]: E1006 12:18:16.349248 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" containerName="ssh-known-hosts-edpm-deployment" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.349268 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" containerName="ssh-known-hosts-edpm-deployment" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.349504 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1a78cf-8260-4c6c-88ed-fa72b63e10a9" containerName="ssh-known-hosts-edpm-deployment" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.350538 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.357213 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.359425 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.359426 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.359488 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.381043 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r"] Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.507794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8896\" (UniqueName: \"kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.507879 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.508322 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.612279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.612543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.612695 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8896\" (UniqueName: \"kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.625003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.625063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.642554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8896\" (UniqueName: \"kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gr96r\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:16 crc kubenswrapper[4698]: I1006 12:18:16.718847 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:17 crc kubenswrapper[4698]: I1006 12:18:17.350882 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r"] Oct 06 12:18:17 crc kubenswrapper[4698]: W1006 12:18:17.354959 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeab59609_328f_41d0_94e9_0f6bcd78eaa5.slice/crio-750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb WatchSource:0}: Error finding container 750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb: Status 404 returned error can't find the container with id 750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb Oct 06 12:18:18 crc kubenswrapper[4698]: I1006 12:18:18.280564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" event={"ID":"eab59609-328f-41d0-94e9-0f6bcd78eaa5","Type":"ContainerStarted","Data":"dd58efa37989611b9dd1db736a21d7284fc7df336e86c581195e707ca4e741da"} Oct 06 12:18:18 crc kubenswrapper[4698]: I1006 12:18:18.281037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" event={"ID":"eab59609-328f-41d0-94e9-0f6bcd78eaa5","Type":"ContainerStarted","Data":"750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb"} Oct 06 12:18:18 crc kubenswrapper[4698]: I1006 12:18:18.320390 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" podStartSLOduration=1.772699511 podStartE2EDuration="2.32035375s" podCreationTimestamp="2025-10-06 12:18:16 +0000 UTC" firstStartedPulling="2025-10-06 12:18:17.358945866 +0000 UTC m=+1984.771638039" lastFinishedPulling="2025-10-06 12:18:17.906600065 +0000 UTC m=+1985.319292278" observedRunningTime="2025-10-06 
12:18:18.309904365 +0000 UTC m=+1985.722596558" watchObservedRunningTime="2025-10-06 12:18:18.32035375 +0000 UTC m=+1985.733045933" Oct 06 12:18:25 crc kubenswrapper[4698]: I1006 12:18:25.234998 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:18:25 crc kubenswrapper[4698]: I1006 12:18:25.236117 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:18:25 crc kubenswrapper[4698]: I1006 12:18:25.236207 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:18:25 crc kubenswrapper[4698]: I1006 12:18:25.237625 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:18:25 crc kubenswrapper[4698]: I1006 12:18:25.237739 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530" gracePeriod=600 Oct 06 12:18:26 crc kubenswrapper[4698]: I1006 12:18:26.394366 4698 generic.go:334] 
"Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530" exitCode=0 Oct 06 12:18:26 crc kubenswrapper[4698]: I1006 12:18:26.394416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530"} Oct 06 12:18:26 crc kubenswrapper[4698]: I1006 12:18:26.395380 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b"} Oct 06 12:18:26 crc kubenswrapper[4698]: I1006 12:18:26.395413 4698 scope.go:117] "RemoveContainer" containerID="c77f816efe8855076188ad607302eeb19ad58b0f250c2ec628033a88f8ef7e50" Oct 06 12:18:29 crc kubenswrapper[4698]: I1006 12:18:29.446182 4698 generic.go:334] "Generic (PLEG): container finished" podID="eab59609-328f-41d0-94e9-0f6bcd78eaa5" containerID="dd58efa37989611b9dd1db736a21d7284fc7df336e86c581195e707ca4e741da" exitCode=0 Oct 06 12:18:29 crc kubenswrapper[4698]: I1006 12:18:29.446348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" event={"ID":"eab59609-328f-41d0-94e9-0f6bcd78eaa5","Type":"ContainerDied","Data":"dd58efa37989611b9dd1db736a21d7284fc7df336e86c581195e707ca4e741da"} Oct 06 12:18:30 crc kubenswrapper[4698]: I1006 12:18:30.993097 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.118881 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key\") pod \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.119108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8896\" (UniqueName: \"kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896\") pod \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.119345 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory\") pod \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\" (UID: \"eab59609-328f-41d0-94e9-0f6bcd78eaa5\") " Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.128180 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896" (OuterVolumeSpecName: "kube-api-access-c8896") pod "eab59609-328f-41d0-94e9-0f6bcd78eaa5" (UID: "eab59609-328f-41d0-94e9-0f6bcd78eaa5"). InnerVolumeSpecName "kube-api-access-c8896". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.170920 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory" (OuterVolumeSpecName: "inventory") pod "eab59609-328f-41d0-94e9-0f6bcd78eaa5" (UID: "eab59609-328f-41d0-94e9-0f6bcd78eaa5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.172384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eab59609-328f-41d0-94e9-0f6bcd78eaa5" (UID: "eab59609-328f-41d0-94e9-0f6bcd78eaa5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.235118 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8896\" (UniqueName: \"kubernetes.io/projected/eab59609-328f-41d0-94e9-0f6bcd78eaa5-kube-api-access-c8896\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.235162 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.235172 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eab59609-328f-41d0-94e9-0f6bcd78eaa5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.476940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" event={"ID":"eab59609-328f-41d0-94e9-0f6bcd78eaa5","Type":"ContainerDied","Data":"750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb"} Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.476994 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750769204dd9af559c106cba1ac0dcb115ee184419175b5e02248f8b6becbffb" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.477078 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gr96r" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.589688 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x"] Oct 06 12:18:31 crc kubenswrapper[4698]: E1006 12:18:31.590825 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab59609-328f-41d0-94e9-0f6bcd78eaa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.590875 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab59609-328f-41d0-94e9-0f6bcd78eaa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.591358 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab59609-328f-41d0-94e9-0f6bcd78eaa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.593287 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.597362 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.597448 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.599882 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.601304 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.603853 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x"] Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.754636 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.754752 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.754922 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqct5\" (UniqueName: \"kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.861880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.861990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.862215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqct5\" (UniqueName: \"kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.869498 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: 
\"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.868679 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.896069 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqct5\" (UniqueName: \"kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:31 crc kubenswrapper[4698]: I1006 12:18:31.917553 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:32 crc kubenswrapper[4698]: I1006 12:18:32.540127 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x"] Oct 06 12:18:32 crc kubenswrapper[4698]: W1006 12:18:32.562280 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod702cd121_45e6_44b8_bdc6_c97634e3307f.slice/crio-e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc WatchSource:0}: Error finding container e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc: Status 404 returned error can't find the container with id e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc Oct 06 12:18:33 crc kubenswrapper[4698]: I1006 12:18:33.508516 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" event={"ID":"702cd121-45e6-44b8-bdc6-c97634e3307f","Type":"ContainerStarted","Data":"e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc"} Oct 06 12:18:34 crc kubenswrapper[4698]: I1006 12:18:34.543641 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" event={"ID":"702cd121-45e6-44b8-bdc6-c97634e3307f","Type":"ContainerStarted","Data":"7ecdfd74b76b9b38a29bfdb4af304a15458bf67219a2f5eac09b2f697ce31cd2"} Oct 06 12:18:34 crc kubenswrapper[4698]: I1006 12:18:34.572390 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" podStartSLOduration=2.944020349 podStartE2EDuration="3.57236002s" podCreationTimestamp="2025-10-06 12:18:31 +0000 UTC" firstStartedPulling="2025-10-06 12:18:32.568221824 +0000 UTC m=+1999.980914017" lastFinishedPulling="2025-10-06 12:18:33.196561485 +0000 UTC m=+2000.609253688" 
observedRunningTime="2025-10-06 12:18:34.565208105 +0000 UTC m=+2001.977900288" watchObservedRunningTime="2025-10-06 12:18:34.57236002 +0000 UTC m=+2001.985052233" Oct 06 12:18:44 crc kubenswrapper[4698]: I1006 12:18:44.674396 4698 generic.go:334] "Generic (PLEG): container finished" podID="702cd121-45e6-44b8-bdc6-c97634e3307f" containerID="7ecdfd74b76b9b38a29bfdb4af304a15458bf67219a2f5eac09b2f697ce31cd2" exitCode=0 Oct 06 12:18:44 crc kubenswrapper[4698]: I1006 12:18:44.674450 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" event={"ID":"702cd121-45e6-44b8-bdc6-c97634e3307f","Type":"ContainerDied","Data":"7ecdfd74b76b9b38a29bfdb4af304a15458bf67219a2f5eac09b2f697ce31cd2"} Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.358609 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.513508 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key\") pod \"702cd121-45e6-44b8-bdc6-c97634e3307f\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.513793 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqct5\" (UniqueName: \"kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5\") pod \"702cd121-45e6-44b8-bdc6-c97634e3307f\" (UID: \"702cd121-45e6-44b8-bdc6-c97634e3307f\") " Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.513957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory\") pod \"702cd121-45e6-44b8-bdc6-c97634e3307f\" (UID: 
\"702cd121-45e6-44b8-bdc6-c97634e3307f\") " Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.535951 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5" (OuterVolumeSpecName: "kube-api-access-hqct5") pod "702cd121-45e6-44b8-bdc6-c97634e3307f" (UID: "702cd121-45e6-44b8-bdc6-c97634e3307f"). InnerVolumeSpecName "kube-api-access-hqct5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.583657 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "702cd121-45e6-44b8-bdc6-c97634e3307f" (UID: "702cd121-45e6-44b8-bdc6-c97634e3307f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.585865 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory" (OuterVolumeSpecName: "inventory") pod "702cd121-45e6-44b8-bdc6-c97634e3307f" (UID: "702cd121-45e6-44b8-bdc6-c97634e3307f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.618905 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.618997 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqct5\" (UniqueName: \"kubernetes.io/projected/702cd121-45e6-44b8-bdc6-c97634e3307f-kube-api-access-hqct5\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.619332 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/702cd121-45e6-44b8-bdc6-c97634e3307f-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.705888 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" event={"ID":"702cd121-45e6-44b8-bdc6-c97634e3307f","Type":"ContainerDied","Data":"e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc"} Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.705948 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e249c00bab1465c52f52982656549f6d04c5cff89be7d8846a2537d3798dcefc" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.706067 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.815947 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v"] Oct 06 12:18:46 crc kubenswrapper[4698]: E1006 12:18:46.817008 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702cd121-45e6-44b8-bdc6-c97634e3307f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.817081 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="702cd121-45e6-44b8-bdc6-c97634e3307f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.817565 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="702cd121-45e6-44b8-bdc6-c97634e3307f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.818854 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.830473 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.830848 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.831124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.831417 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.831625 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.832068 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.832080 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.832615 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.844882 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v"] Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.927668 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjfv\" (UniqueName: 
\"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.927735 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.927893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928176 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928523 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928607 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.928798 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.929231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.929351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.929572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.929627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:46 crc kubenswrapper[4698]: I1006 12:18:46.929839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.033257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.033351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.033440 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.033496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjfv\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.033600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.034926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.034997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.035094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.035148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.036708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.036766 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.036886 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.037201 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.037271 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.042073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 
12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.042909 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.043411 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.044086 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.045247 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.045309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.045797 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.046467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.046900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.047558 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: 
\"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.048659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.049358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.050452 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.061687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjfv\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dl55v\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.180832 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:18:47 crc kubenswrapper[4698]: I1006 12:18:47.860859 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v"] Oct 06 12:18:48 crc kubenswrapper[4698]: I1006 12:18:48.732131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" event={"ID":"8624f3b8-45df-4efd-b49f-33838276c948","Type":"ContainerStarted","Data":"11b5c103845c64f005096742dd15e2e0870a5cb43402e5fbd7ac5e46b136c6e4"} Oct 06 12:18:49 crc kubenswrapper[4698]: I1006 12:18:49.750569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" event={"ID":"8624f3b8-45df-4efd-b49f-33838276c948","Type":"ContainerStarted","Data":"0b6346ab5d82cb864135e5258049c7ac4cdec40473399a27cb80a5a4a0550a5e"} Oct 06 12:18:49 crc kubenswrapper[4698]: I1006 12:18:49.806401 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" podStartSLOduration=3.05772665 podStartE2EDuration="3.806362992s" podCreationTimestamp="2025-10-06 12:18:46 +0000 UTC" firstStartedPulling="2025-10-06 12:18:47.864220622 +0000 UTC m=+2015.276912825" lastFinishedPulling="2025-10-06 12:18:48.612856984 +0000 UTC m=+2016.025549167" observedRunningTime="2025-10-06 12:18:49.786567778 +0000 UTC m=+2017.199260011" watchObservedRunningTime="2025-10-06 12:18:49.806362992 +0000 UTC m=+2017.219055205" Oct 06 12:19:35 crc kubenswrapper[4698]: I1006 12:19:35.345949 4698 generic.go:334] "Generic (PLEG): container finished" podID="8624f3b8-45df-4efd-b49f-33838276c948" 
containerID="0b6346ab5d82cb864135e5258049c7ac4cdec40473399a27cb80a5a4a0550a5e" exitCode=0 Oct 06 12:19:35 crc kubenswrapper[4698]: I1006 12:19:35.358656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" event={"ID":"8624f3b8-45df-4efd-b49f-33838276c948","Type":"ContainerDied","Data":"0b6346ab5d82cb864135e5258049c7ac4cdec40473399a27cb80a5a4a0550a5e"} Oct 06 12:19:36 crc kubenswrapper[4698]: I1006 12:19:36.915976 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034004 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034174 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034207 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034331 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034878 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjfv\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034898 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key\") pod 
\"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034972 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.034993 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.035031 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.035051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.035115 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle\") 
pod \"8624f3b8-45df-4efd-b49f-33838276c948\" (UID: \"8624f3b8-45df-4efd-b49f-33838276c948\") " Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.042818 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.044442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.047139 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.049170 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.051234 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.051305 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv" (OuterVolumeSpecName: "kube-api-access-bxjfv") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "kube-api-access-bxjfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.051453 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.051891 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.052150 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.052616 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.054073 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.054281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.093408 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory" (OuterVolumeSpecName: "inventory") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.097865 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8624f3b8-45df-4efd-b49f-33838276c948" (UID: "8624f3b8-45df-4efd-b49f-33838276c948"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144529 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144557 4698 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144572 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144583 4698 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144595 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144603 4698 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144613 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144622 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjfv\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-kube-api-access-bxjfv\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144630 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144640 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144649 4698 reconciler_common.go:293] "Volume 
detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144660 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144672 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8624f3b8-45df-4efd-b49f-33838276c948-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.144681 4698 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8624f3b8-45df-4efd-b49f-33838276c948-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.375971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" event={"ID":"8624f3b8-45df-4efd-b49f-33838276c948","Type":"ContainerDied","Data":"11b5c103845c64f005096742dd15e2e0870a5cb43402e5fbd7ac5e46b136c6e4"} Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.376180 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b5c103845c64f005096742dd15e2e0870a5cb43402e5fbd7ac5e46b136c6e4" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.376429 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dl55v" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.563519 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg"] Oct 06 12:19:37 crc kubenswrapper[4698]: E1006 12:19:37.564141 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8624f3b8-45df-4efd-b49f-33838276c948" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.564164 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8624f3b8-45df-4efd-b49f-33838276c948" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.564411 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8624f3b8-45df-4efd-b49f-33838276c948" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.565409 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.569332 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.569577 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.569800 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.569961 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.576985 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg"] Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.583811 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.661370 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.661473 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.661716 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.661791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.661837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4bdv\" (UniqueName: \"kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.765887 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.766285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.766624 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.766928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.766961 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4bdv\" (UniqueName: \"kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.768502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc 
kubenswrapper[4698]: I1006 12:19:37.773604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.773607 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.775743 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.798342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4bdv\" (UniqueName: \"kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gklpg\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:37 crc kubenswrapper[4698]: I1006 12:19:37.892389 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:19:38 crc kubenswrapper[4698]: I1006 12:19:38.621902 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg"] Oct 06 12:19:39 crc kubenswrapper[4698]: I1006 12:19:39.405222 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" event={"ID":"4112723d-ae85-4f84-867e-9219f74672ff","Type":"ContainerStarted","Data":"60a244edcd42bd19121759526e2cd267e36635ec214b590bb2c221ac320c9e40"} Oct 06 12:19:39 crc kubenswrapper[4698]: I1006 12:19:39.405715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" event={"ID":"4112723d-ae85-4f84-867e-9219f74672ff","Type":"ContainerStarted","Data":"da992d72f2e681ffb3515c61e3d37758e9af33eedc8d4118eef44e8e84ba5c66"} Oct 06 12:19:39 crc kubenswrapper[4698]: I1006 12:19:39.441003 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" podStartSLOduration=1.986390828 podStartE2EDuration="2.440976342s" podCreationTimestamp="2025-10-06 12:19:37 +0000 UTC" firstStartedPulling="2025-10-06 12:19:38.642267125 +0000 UTC m=+2066.054959318" lastFinishedPulling="2025-10-06 12:19:39.096852659 +0000 UTC m=+2066.509544832" observedRunningTime="2025-10-06 12:19:39.422863859 +0000 UTC m=+2066.835556032" watchObservedRunningTime="2025-10-06 12:19:39.440976342 +0000 UTC m=+2066.853668515" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.476558 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.481861 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.498424 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.586183 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhclw\" (UniqueName: \"kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.586296 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.586434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.688942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhclw\" (UniqueName: \"kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.689618 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.689772 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.690292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.690410 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.719315 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhclw\" (UniqueName: \"kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw\") pod \"community-operators-wtv29\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:48 crc kubenswrapper[4698]: I1006 12:19:48.817048 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:49 crc kubenswrapper[4698]: I1006 12:19:49.424250 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:19:49 crc kubenswrapper[4698]: W1006 12:19:49.435926 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6769801_33b9_4cad_b072_bd60387be8e7.slice/crio-31f84e5db829c5a0e4357d4183648918905ede27c680c9f17dcc4c6f1d740273 WatchSource:0}: Error finding container 31f84e5db829c5a0e4357d4183648918905ede27c680c9f17dcc4c6f1d740273: Status 404 returned error can't find the container with id 31f84e5db829c5a0e4357d4183648918905ede27c680c9f17dcc4c6f1d740273 Oct 06 12:19:49 crc kubenswrapper[4698]: I1006 12:19:49.598583 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerStarted","Data":"31f84e5db829c5a0e4357d4183648918905ede27c680c9f17dcc4c6f1d740273"} Oct 06 12:19:50 crc kubenswrapper[4698]: I1006 12:19:50.617038 4698 generic.go:334] "Generic (PLEG): container finished" podID="f6769801-33b9-4cad-b072-bd60387be8e7" containerID="656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d" exitCode=0 Oct 06 12:19:50 crc kubenswrapper[4698]: I1006 12:19:50.617236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerDied","Data":"656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d"} Oct 06 12:19:51 crc kubenswrapper[4698]: I1006 12:19:51.630754 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" 
event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerStarted","Data":"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8"} Oct 06 12:19:52 crc kubenswrapper[4698]: I1006 12:19:52.646322 4698 generic.go:334] "Generic (PLEG): container finished" podID="f6769801-33b9-4cad-b072-bd60387be8e7" containerID="cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8" exitCode=0 Oct 06 12:19:52 crc kubenswrapper[4698]: I1006 12:19:52.646507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerDied","Data":"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8"} Oct 06 12:19:53 crc kubenswrapper[4698]: I1006 12:19:53.660694 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerStarted","Data":"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a"} Oct 06 12:19:53 crc kubenswrapper[4698]: I1006 12:19:53.694897 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtv29" podStartSLOduration=3.094347407 podStartE2EDuration="5.694875613s" podCreationTimestamp="2025-10-06 12:19:48 +0000 UTC" firstStartedPulling="2025-10-06 12:19:50.622980333 +0000 UTC m=+2078.035672546" lastFinishedPulling="2025-10-06 12:19:53.223508579 +0000 UTC m=+2080.636200752" observedRunningTime="2025-10-06 12:19:53.692091265 +0000 UTC m=+2081.104783438" watchObservedRunningTime="2025-10-06 12:19:53.694875613 +0000 UTC m=+2081.107567786" Oct 06 12:19:58 crc kubenswrapper[4698]: I1006 12:19:58.817414 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:58 crc kubenswrapper[4698]: I1006 12:19:58.818321 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:58 crc kubenswrapper[4698]: I1006 12:19:58.893470 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:59 crc kubenswrapper[4698]: I1006 12:19:59.830733 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:19:59 crc kubenswrapper[4698]: I1006 12:19:59.904758 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:20:01 crc kubenswrapper[4698]: I1006 12:20:01.766208 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wtv29" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="registry-server" containerID="cri-o://e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a" gracePeriod=2 Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.371644 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.501948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities\") pod \"f6769801-33b9-4cad-b072-bd60387be8e7\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.502025 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhclw\" (UniqueName: \"kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw\") pod \"f6769801-33b9-4cad-b072-bd60387be8e7\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.502490 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content\") pod \"f6769801-33b9-4cad-b072-bd60387be8e7\" (UID: \"f6769801-33b9-4cad-b072-bd60387be8e7\") " Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.502805 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities" (OuterVolumeSpecName: "utilities") pod "f6769801-33b9-4cad-b072-bd60387be8e7" (UID: "f6769801-33b9-4cad-b072-bd60387be8e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.503910 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.510893 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw" (OuterVolumeSpecName: "kube-api-access-vhclw") pod "f6769801-33b9-4cad-b072-bd60387be8e7" (UID: "f6769801-33b9-4cad-b072-bd60387be8e7"). InnerVolumeSpecName "kube-api-access-vhclw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.562275 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6769801-33b9-4cad-b072-bd60387be8e7" (UID: "f6769801-33b9-4cad-b072-bd60387be8e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.607732 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6769801-33b9-4cad-b072-bd60387be8e7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.607808 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhclw\" (UniqueName: \"kubernetes.io/projected/f6769801-33b9-4cad-b072-bd60387be8e7-kube-api-access-vhclw\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.780914 4698 generic.go:334] "Generic (PLEG): container finished" podID="f6769801-33b9-4cad-b072-bd60387be8e7" containerID="e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a" exitCode=0 Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.780982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerDied","Data":"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a"} Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.781043 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtv29" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.781078 4698 scope.go:117] "RemoveContainer" containerID="e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.781061 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtv29" event={"ID":"f6769801-33b9-4cad-b072-bd60387be8e7","Type":"ContainerDied","Data":"31f84e5db829c5a0e4357d4183648918905ede27c680c9f17dcc4c6f1d740273"} Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.821551 4698 scope.go:117] "RemoveContainer" containerID="cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.837106 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.849726 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wtv29"] Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.877122 4698 scope.go:117] "RemoveContainer" containerID="656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.914939 4698 scope.go:117] "RemoveContainer" containerID="e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a" Oct 06 12:20:02 crc kubenswrapper[4698]: E1006 12:20:02.915748 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a\": container with ID starting with e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a not found: ID does not exist" containerID="e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.915827 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a"} err="failed to get container status \"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a\": rpc error: code = NotFound desc = could not find container \"e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a\": container with ID starting with e521f45892bb1ee3dbed7db9ca5b0b802048fbe223c6e3871aeabaed5e07f60a not found: ID does not exist" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.915876 4698 scope.go:117] "RemoveContainer" containerID="cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8" Oct 06 12:20:02 crc kubenswrapper[4698]: E1006 12:20:02.916434 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8\": container with ID starting with cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8 not found: ID does not exist" containerID="cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.916572 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8"} err="failed to get container status \"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8\": rpc error: code = NotFound desc = could not find container \"cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8\": container with ID starting with cfd4686843ced2b45a4db13a3c6e0d75f22c2e0c97d7812ab353252807f172a8 not found: ID does not exist" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.916680 4698 scope.go:117] "RemoveContainer" containerID="656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d" Oct 06 12:20:02 crc kubenswrapper[4698]: E1006 
12:20:02.917181 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d\": container with ID starting with 656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d not found: ID does not exist" containerID="656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d" Oct 06 12:20:02 crc kubenswrapper[4698]: I1006 12:20:02.917235 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d"} err="failed to get container status \"656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d\": rpc error: code = NotFound desc = could not find container \"656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d\": container with ID starting with 656fb1372b05850e5024a2b9903e07909ffdab2800a67c75cacc47fe1947503d not found: ID does not exist" Oct 06 12:20:03 crc kubenswrapper[4698]: I1006 12:20:03.344368 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" path="/var/lib/kubelet/pods/f6769801-33b9-4cad-b072-bd60387be8e7/volumes" Oct 06 12:20:25 crc kubenswrapper[4698]: I1006 12:20:25.235260 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:20:25 crc kubenswrapper[4698]: I1006 12:20:25.236397 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 06 12:20:55 crc kubenswrapper[4698]: I1006 12:20:55.234982 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:20:55 crc kubenswrapper[4698]: I1006 12:20:55.236097 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:20:56 crc kubenswrapper[4698]: I1006 12:20:56.478946 4698 generic.go:334] "Generic (PLEG): container finished" podID="4112723d-ae85-4f84-867e-9219f74672ff" containerID="60a244edcd42bd19121759526e2cd267e36635ec214b590bb2c221ac320c9e40" exitCode=0 Oct 06 12:20:56 crc kubenswrapper[4698]: I1006 12:20:56.479101 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" event={"ID":"4112723d-ae85-4f84-867e-9219f74672ff","Type":"ContainerDied","Data":"60a244edcd42bd19121759526e2cd267e36635ec214b590bb2c221ac320c9e40"} Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.028555 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.085234 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle\") pod \"4112723d-ae85-4f84-867e-9219f74672ff\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.085301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory\") pod \"4112723d-ae85-4f84-867e-9219f74672ff\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.085354 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0\") pod \"4112723d-ae85-4f84-867e-9219f74672ff\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.085573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4bdv\" (UniqueName: \"kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv\") pod \"4112723d-ae85-4f84-867e-9219f74672ff\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.085720 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key\") pod \"4112723d-ae85-4f84-867e-9219f74672ff\" (UID: \"4112723d-ae85-4f84-867e-9219f74672ff\") " Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.094077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv" (OuterVolumeSpecName: "kube-api-access-j4bdv") pod "4112723d-ae85-4f84-867e-9219f74672ff" (UID: "4112723d-ae85-4f84-867e-9219f74672ff"). InnerVolumeSpecName "kube-api-access-j4bdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.094974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4112723d-ae85-4f84-867e-9219f74672ff" (UID: "4112723d-ae85-4f84-867e-9219f74672ff"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.123823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4112723d-ae85-4f84-867e-9219f74672ff" (UID: "4112723d-ae85-4f84-867e-9219f74672ff"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.134781 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory" (OuterVolumeSpecName: "inventory") pod "4112723d-ae85-4f84-867e-9219f74672ff" (UID: "4112723d-ae85-4f84-867e-9219f74672ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.136469 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4112723d-ae85-4f84-867e-9219f74672ff" (UID: "4112723d-ae85-4f84-867e-9219f74672ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.189122 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.189160 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.189171 4698 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4112723d-ae85-4f84-867e-9219f74672ff-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.189181 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4bdv\" (UniqueName: \"kubernetes.io/projected/4112723d-ae85-4f84-867e-9219f74672ff-kube-api-access-j4bdv\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.189194 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4112723d-ae85-4f84-867e-9219f74672ff-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.509459 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" event={"ID":"4112723d-ae85-4f84-867e-9219f74672ff","Type":"ContainerDied","Data":"da992d72f2e681ffb3515c61e3d37758e9af33eedc8d4118eef44e8e84ba5c66"} Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.509525 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gklpg" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.509528 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da992d72f2e681ffb3515c61e3d37758e9af33eedc8d4118eef44e8e84ba5c66" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.814319 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h"] Oct 06 12:20:58 crc kubenswrapper[4698]: E1006 12:20:58.815082 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4112723d-ae85-4f84-867e-9219f74672ff" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815119 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4112723d-ae85-4f84-867e-9219f74672ff" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:58 crc kubenswrapper[4698]: E1006 12:20:58.815175 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="extract-content" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815190 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="extract-content" Oct 06 12:20:58 crc kubenswrapper[4698]: E1006 12:20:58.815236 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="registry-server" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815255 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="registry-server" Oct 06 12:20:58 crc kubenswrapper[4698]: E1006 12:20:58.815283 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="extract-utilities" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815297 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="extract-utilities" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815695 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6769801-33b9-4cad-b072-bd60387be8e7" containerName="registry-server" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.815730 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4112723d-ae85-4f84-867e-9219f74672ff" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.817117 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.835590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.835903 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.836058 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.836230 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.836375 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.836533 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.844650 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h"] Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.908873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.908927 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x8kb\" (UniqueName: \"kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.908952 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.908972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.909001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:58 crc kubenswrapper[4698]: I1006 12:20:58.909056 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.010900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.010970 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x8kb\" (UniqueName: \"kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.011002 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.011054 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.011090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.012173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.018471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.019310 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.021436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.021471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.024907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.044394 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x8kb\" (UniqueName: \"kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 
12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.162669 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.788801 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h"] Oct 06 12:20:59 crc kubenswrapper[4698]: W1006 12:20:59.799231 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc824e0ef_121e_428f_bf96_f9e1c87e57c6.slice/crio-b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106 WatchSource:0}: Error finding container b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106: Status 404 returned error can't find the container with id b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106 Oct 06 12:20:59 crc kubenswrapper[4698]: I1006 12:20:59.803240 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:21:00 crc kubenswrapper[4698]: I1006 12:21:00.535641 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" event={"ID":"c824e0ef-121e-428f-bf96-f9e1c87e57c6","Type":"ContainerStarted","Data":"b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106"} Oct 06 12:21:01 crc kubenswrapper[4698]: I1006 12:21:01.551219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" event={"ID":"c824e0ef-121e-428f-bf96-f9e1c87e57c6","Type":"ContainerStarted","Data":"1ef4b9128baa420b701c267f15a7fc8dcbef9fe45758545497e8c0c2e888af07"} Oct 06 12:21:01 crc kubenswrapper[4698]: I1006 12:21:01.586321 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" podStartSLOduration=3.075074487 podStartE2EDuration="3.586291723s" podCreationTimestamp="2025-10-06 12:20:58 +0000 UTC" firstStartedPulling="2025-10-06 12:20:59.802912381 +0000 UTC m=+2147.215604554" lastFinishedPulling="2025-10-06 12:21:00.314129617 +0000 UTC m=+2147.726821790" observedRunningTime="2025-10-06 12:21:01.573439235 +0000 UTC m=+2148.986131428" watchObservedRunningTime="2025-10-06 12:21:01.586291723 +0000 UTC m=+2148.998983916" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.235497 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.236808 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.236919 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.238369 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.238516 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" gracePeriod=600 Oct 06 12:21:25 crc kubenswrapper[4698]: E1006 12:21:25.373149 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.871723 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" exitCode=0 Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.871793 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b"} Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.871842 4698 scope.go:117] "RemoveContainer" containerID="44e1fed630ac0541961ac92a51ae0f5dee25c325829cdc321ce14730e5443530" Oct 06 12:21:25 crc kubenswrapper[4698]: I1006 12:21:25.873075 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:21:25 crc kubenswrapper[4698]: E1006 12:21:25.873617 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:21:38 crc kubenswrapper[4698]: I1006 12:21:38.329896 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:21:38 crc kubenswrapper[4698]: E1006 12:21:38.330964 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:21:52 crc kubenswrapper[4698]: I1006 12:21:52.329370 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:21:52 crc kubenswrapper[4698]: E1006 12:21:52.330328 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:22:01 crc kubenswrapper[4698]: I1006 12:22:01.362902 4698 generic.go:334] "Generic (PLEG): container finished" podID="c824e0ef-121e-428f-bf96-f9e1c87e57c6" containerID="1ef4b9128baa420b701c267f15a7fc8dcbef9fe45758545497e8c0c2e888af07" exitCode=0 Oct 06 12:22:01 crc kubenswrapper[4698]: I1006 12:22:01.362998 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" event={"ID":"c824e0ef-121e-428f-bf96-f9e1c87e57c6","Type":"ContainerDied","Data":"1ef4b9128baa420b701c267f15a7fc8dcbef9fe45758545497e8c0c2e888af07"} Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.933982 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x8kb\" (UniqueName: \"kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973306 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973368 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.973419 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\" (UID: \"c824e0ef-121e-428f-bf96-f9e1c87e57c6\") " Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.983785 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:22:02 crc kubenswrapper[4698]: I1006 12:22:02.987668 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb" (OuterVolumeSpecName: "kube-api-access-4x8kb") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). InnerVolumeSpecName "kube-api-access-4x8kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.018196 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.026064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory" (OuterVolumeSpecName: "inventory") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.039075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.053096 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c824e0ef-121e-428f-bf96-f9e1c87e57c6" (UID: "c824e0ef-121e-428f-bf96-f9e1c87e57c6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077040 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x8kb\" (UniqueName: \"kubernetes.io/projected/c824e0ef-121e-428f-bf96-f9e1c87e57c6-kube-api-access-4x8kb\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077085 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077098 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077114 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077129 4698 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.077142 4698 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c824e0ef-121e-428f-bf96-f9e1c87e57c6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.388896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" 
event={"ID":"c824e0ef-121e-428f-bf96-f9e1c87e57c6","Type":"ContainerDied","Data":"b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106"} Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.388946 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e0a431685cc81fac792e4240ebf90f710b0d94defcc9b6b1a1adf8f28a8106" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.389072 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.627361 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9"] Oct 06 12:22:03 crc kubenswrapper[4698]: E1006 12:22:03.634531 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c824e0ef-121e-428f-bf96-f9e1c87e57c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.634748 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c824e0ef-121e-428f-bf96-f9e1c87e57c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.636989 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c824e0ef-121e-428f-bf96-f9e1c87e57c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.639364 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.643303 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.643316 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.643875 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.644274 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.647866 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.653924 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9"] Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.691798 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh52q\" (UniqueName: \"kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.691942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.692074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.692154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.692225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.793912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.793966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" 
(UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.794006 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.794150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh52q\" (UniqueName: \"kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.794219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.799292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 
12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.799418 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.799868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.800327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.812990 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh52q\" (UniqueName: \"kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:03 crc kubenswrapper[4698]: I1006 12:22:03.962774 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:22:04 crc kubenswrapper[4698]: I1006 12:22:04.598921 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9"] Oct 06 12:22:05 crc kubenswrapper[4698]: I1006 12:22:05.414257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" event={"ID":"7a102252-962d-4cb3-970b-acd2557e633e","Type":"ContainerStarted","Data":"dd0d5727f7934eab4321d90dd263f7b6d03a6adff30a29220878f876d612e40d"} Oct 06 12:22:06 crc kubenswrapper[4698]: I1006 12:22:06.331292 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:22:06 crc kubenswrapper[4698]: E1006 12:22:06.332625 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:22:06 crc kubenswrapper[4698]: I1006 12:22:06.433444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" event={"ID":"7a102252-962d-4cb3-970b-acd2557e633e","Type":"ContainerStarted","Data":"bf4461f1a7fc29ad8a90cb0bcbbaff6ee4b621b9645d2855c01fdd031513461d"} Oct 06 12:22:06 crc kubenswrapper[4698]: I1006 12:22:06.477913 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" podStartSLOduration=2.830119877 podStartE2EDuration="3.47788188s" podCreationTimestamp="2025-10-06 12:22:03 +0000 UTC" firstStartedPulling="2025-10-06 12:22:04.609420313 +0000 
UTC m=+2212.022112486" lastFinishedPulling="2025-10-06 12:22:05.257182276 +0000 UTC m=+2212.669874489" observedRunningTime="2025-10-06 12:22:06.459759292 +0000 UTC m=+2213.872451505" watchObservedRunningTime="2025-10-06 12:22:06.47788188 +0000 UTC m=+2213.890574083" Oct 06 12:22:19 crc kubenswrapper[4698]: I1006 12:22:19.330422 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:22:19 crc kubenswrapper[4698]: E1006 12:22:19.331795 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:22:34 crc kubenswrapper[4698]: I1006 12:22:34.330195 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:22:34 crc kubenswrapper[4698]: E1006 12:22:34.331244 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:22:48 crc kubenswrapper[4698]: I1006 12:22:48.329974 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:22:48 crc kubenswrapper[4698]: E1006 12:22:48.331342 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:23:03 crc kubenswrapper[4698]: I1006 12:23:03.335674 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:23:03 crc kubenswrapper[4698]: E1006 12:23:03.336802 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:23:16 crc kubenswrapper[4698]: I1006 12:23:16.329157 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:23:16 crc kubenswrapper[4698]: E1006 12:23:16.330143 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:23:30 crc kubenswrapper[4698]: I1006 12:23:30.329970 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:23:30 crc kubenswrapper[4698]: E1006 12:23:30.331035 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:23:45 crc kubenswrapper[4698]: I1006 12:23:45.329887 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:23:45 crc kubenswrapper[4698]: E1006 12:23:45.331576 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:23:58 crc kubenswrapper[4698]: I1006 12:23:58.329803 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:23:58 crc kubenswrapper[4698]: E1006 12:23:58.330606 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:24:09 crc kubenswrapper[4698]: I1006 12:24:09.330136 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:24:09 crc kubenswrapper[4698]: E1006 12:24:09.330819 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:24:21 crc kubenswrapper[4698]: I1006 12:24:21.330305 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:24:21 crc kubenswrapper[4698]: E1006 12:24:21.331509 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:24:35 crc kubenswrapper[4698]: I1006 12:24:35.329478 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:24:35 crc kubenswrapper[4698]: E1006 12:24:35.330582 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:24:50 crc kubenswrapper[4698]: I1006 12:24:50.331048 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:24:50 crc kubenswrapper[4698]: E1006 12:24:50.332391 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:25:04 crc kubenswrapper[4698]: I1006 12:25:04.330747 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:25:04 crc kubenswrapper[4698]: E1006 12:25:04.338601 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:25:17 crc kubenswrapper[4698]: I1006 12:25:17.329458 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:25:17 crc kubenswrapper[4698]: E1006 12:25:17.330860 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:25:32 crc kubenswrapper[4698]: I1006 12:25:32.330030 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:25:32 crc kubenswrapper[4698]: E1006 12:25:32.334126 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:25:45 crc kubenswrapper[4698]: I1006 12:25:45.329552 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:25:45 crc kubenswrapper[4698]: E1006 12:25:45.330793 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.245575 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.251463 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.278638 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.422929 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.422995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.423069 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7r6d\" (UniqueName: \"kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.526623 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.526697 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.526770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7r6d\" (UniqueName: \"kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.527304 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.527647 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.556027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7r6d\" (UniqueName: \"kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d\") pod \"redhat-marketplace-dl8g4\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.588257 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:49 crc kubenswrapper[4698]: I1006 12:25:49.888884 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:25:50 crc kubenswrapper[4698]: I1006 12:25:50.503407 4698 generic.go:334] "Generic (PLEG): container finished" podID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerID="f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98" exitCode=0 Oct 06 12:25:50 crc kubenswrapper[4698]: I1006 12:25:50.503601 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerDied","Data":"f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98"} Oct 06 12:25:50 crc kubenswrapper[4698]: I1006 12:25:50.504237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerStarted","Data":"16c84cbb5ea6189a0e84918824d01026e6d9674b276f2edde96b2d4a06474582"} Oct 06 12:25:52 crc kubenswrapper[4698]: I1006 12:25:52.537340 4698 generic.go:334] "Generic (PLEG): container finished" podID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerID="23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182" exitCode=0 Oct 06 12:25:52 crc kubenswrapper[4698]: I1006 12:25:52.537530 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerDied","Data":"23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182"} Oct 06 12:25:53 crc kubenswrapper[4698]: I1006 12:25:53.558105 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" 
event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerStarted","Data":"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407"} Oct 06 12:25:53 crc kubenswrapper[4698]: I1006 12:25:53.615855 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dl8g4" podStartSLOduration=2.023171105 podStartE2EDuration="4.615831125s" podCreationTimestamp="2025-10-06 12:25:49 +0000 UTC" firstStartedPulling="2025-10-06 12:25:50.506091444 +0000 UTC m=+2437.918783648" lastFinishedPulling="2025-10-06 12:25:53.098751475 +0000 UTC m=+2440.511443668" observedRunningTime="2025-10-06 12:25:53.600559647 +0000 UTC m=+2441.013251820" watchObservedRunningTime="2025-10-06 12:25:53.615831125 +0000 UTC m=+2441.028523308" Oct 06 12:25:59 crc kubenswrapper[4698]: I1006 12:25:59.589283 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:59 crc kubenswrapper[4698]: I1006 12:25:59.589625 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:59 crc kubenswrapper[4698]: I1006 12:25:59.640792 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:59 crc kubenswrapper[4698]: I1006 12:25:59.699005 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:25:59 crc kubenswrapper[4698]: I1006 12:25:59.882279 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:26:00 crc kubenswrapper[4698]: I1006 12:26:00.328782 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:26:00 crc kubenswrapper[4698]: E1006 12:26:00.329061 4698 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:26:01 crc kubenswrapper[4698]: I1006 12:26:01.654867 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dl8g4" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="registry-server" containerID="cri-o://a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407" gracePeriod=2 Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.166202 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.234787 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7r6d\" (UniqueName: \"kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d\") pod \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.234895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content\") pod \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\" (UID: \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.235098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities\") pod \"9248c6a7-f43b-438d-8a1f-06e4a7028da1\" (UID: 
\"9248c6a7-f43b-438d-8a1f-06e4a7028da1\") " Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.236691 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities" (OuterVolumeSpecName: "utilities") pod "9248c6a7-f43b-438d-8a1f-06e4a7028da1" (UID: "9248c6a7-f43b-438d-8a1f-06e4a7028da1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.252532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d" (OuterVolumeSpecName: "kube-api-access-v7r6d") pod "9248c6a7-f43b-438d-8a1f-06e4a7028da1" (UID: "9248c6a7-f43b-438d-8a1f-06e4a7028da1"). InnerVolumeSpecName "kube-api-access-v7r6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.255345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9248c6a7-f43b-438d-8a1f-06e4a7028da1" (UID: "9248c6a7-f43b-438d-8a1f-06e4a7028da1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.337331 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.337377 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7r6d\" (UniqueName: \"kubernetes.io/projected/9248c6a7-f43b-438d-8a1f-06e4a7028da1-kube-api-access-v7r6d\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.337393 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9248c6a7-f43b-438d-8a1f-06e4a7028da1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.667627 4698 generic.go:334] "Generic (PLEG): container finished" podID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerID="a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407" exitCode=0 Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.667695 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerDied","Data":"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407"} Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.667728 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl8g4" event={"ID":"9248c6a7-f43b-438d-8a1f-06e4a7028da1","Type":"ContainerDied","Data":"16c84cbb5ea6189a0e84918824d01026e6d9674b276f2edde96b2d4a06474582"} Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.667746 4698 scope.go:117] "RemoveContainer" containerID="a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 
12:26:02.667947 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl8g4" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.702064 4698 scope.go:117] "RemoveContainer" containerID="23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.708485 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.720986 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl8g4"] Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.741247 4698 scope.go:117] "RemoveContainer" containerID="f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.785493 4698 scope.go:117] "RemoveContainer" containerID="a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407" Oct 06 12:26:02 crc kubenswrapper[4698]: E1006 12:26:02.786087 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407\": container with ID starting with a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407 not found: ID does not exist" containerID="a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.786134 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407"} err="failed to get container status \"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407\": rpc error: code = NotFound desc = could not find container \"a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407\": container with ID starting with 
a757f39bf4d5ed45882be0834909f50042be7ddce3a23da25c76605031e0f407 not found: ID does not exist" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.786155 4698 scope.go:117] "RemoveContainer" containerID="23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182" Oct 06 12:26:02 crc kubenswrapper[4698]: E1006 12:26:02.786693 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182\": container with ID starting with 23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182 not found: ID does not exist" containerID="23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.786716 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182"} err="failed to get container status \"23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182\": rpc error: code = NotFound desc = could not find container \"23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182\": container with ID starting with 23a83baf1fbd40f36a5813e7a7013f60d2863f04619e94bf4a89ce2aa0c73182 not found: ID does not exist" Oct 06 12:26:02 crc kubenswrapper[4698]: I1006 12:26:02.786730 4698 scope.go:117] "RemoveContainer" containerID="f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98" Oct 06 12:26:02 crc kubenswrapper[4698]: E1006 12:26:02.787177 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98\": container with ID starting with f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98 not found: ID does not exist" containerID="f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98" Oct 06 12:26:02 crc 
kubenswrapper[4698]: I1006 12:26:02.787202 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98"} err="failed to get container status \"f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98\": rpc error: code = NotFound desc = could not find container \"f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98\": container with ID starting with f0ac5020282ea2bd0fd53f5933df5369d18e6da10da8dd5229a3bcd219cefa98 not found: ID does not exist" Oct 06 12:26:03 crc kubenswrapper[4698]: I1006 12:26:03.347456 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" path="/var/lib/kubelet/pods/9248c6a7-f43b-438d-8a1f-06e4a7028da1/volumes" Oct 06 12:26:13 crc kubenswrapper[4698]: I1006 12:26:13.379471 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:26:13 crc kubenswrapper[4698]: E1006 12:26:13.383047 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:26:25 crc kubenswrapper[4698]: I1006 12:26:25.329556 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:26:25 crc kubenswrapper[4698]: I1006 12:26:25.974449 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94"} Oct 06 12:26:54 crc kubenswrapper[4698]: I1006 12:26:54.382512 4698 generic.go:334] "Generic (PLEG): container finished" podID="7a102252-962d-4cb3-970b-acd2557e633e" containerID="bf4461f1a7fc29ad8a90cb0bcbbaff6ee4b621b9645d2855c01fdd031513461d" exitCode=0 Oct 06 12:26:54 crc kubenswrapper[4698]: I1006 12:26:54.382801 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" event={"ID":"7a102252-962d-4cb3-970b-acd2557e633e","Type":"ContainerDied","Data":"bf4461f1a7fc29ad8a90cb0bcbbaff6ee4b621b9645d2855c01fdd031513461d"} Oct 06 12:26:55 crc kubenswrapper[4698]: I1006 12:26:55.986059 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.083620 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0\") pod \"7a102252-962d-4cb3-970b-acd2557e633e\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.083709 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key\") pod \"7a102252-962d-4cb3-970b-acd2557e633e\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.083909 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh52q\" (UniqueName: \"kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q\") pod \"7a102252-962d-4cb3-970b-acd2557e633e\" (UID: 
\"7a102252-962d-4cb3-970b-acd2557e633e\") " Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.084058 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory\") pod \"7a102252-962d-4cb3-970b-acd2557e633e\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.084277 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle\") pod \"7a102252-962d-4cb3-970b-acd2557e633e\" (UID: \"7a102252-962d-4cb3-970b-acd2557e633e\") " Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.094438 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7a102252-962d-4cb3-970b-acd2557e633e" (UID: "7a102252-962d-4cb3-970b-acd2557e633e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.104610 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q" (OuterVolumeSpecName: "kube-api-access-nh52q") pod "7a102252-962d-4cb3-970b-acd2557e633e" (UID: "7a102252-962d-4cb3-970b-acd2557e633e"). InnerVolumeSpecName "kube-api-access-nh52q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.125597 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory" (OuterVolumeSpecName: "inventory") pod "7a102252-962d-4cb3-970b-acd2557e633e" (UID: "7a102252-962d-4cb3-970b-acd2557e633e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.128664 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7a102252-962d-4cb3-970b-acd2557e633e" (UID: "7a102252-962d-4cb3-970b-acd2557e633e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.149555 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7a102252-962d-4cb3-970b-acd2557e633e" (UID: "7a102252-962d-4cb3-970b-acd2557e633e"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.187994 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.188055 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.188068 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.188087 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh52q\" (UniqueName: \"kubernetes.io/projected/7a102252-962d-4cb3-970b-acd2557e633e-kube-api-access-nh52q\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.188100 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a102252-962d-4cb3-970b-acd2557e633e-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.412980 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" event={"ID":"7a102252-962d-4cb3-970b-acd2557e633e","Type":"ContainerDied","Data":"dd0d5727f7934eab4321d90dd263f7b6d03a6adff30a29220878f876d612e40d"} Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.413489 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd0d5727f7934eab4321d90dd263f7b6d03a6adff30a29220878f876d612e40d" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 
12:26:56.413116 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.604391 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf"] Oct 06 12:26:56 crc kubenswrapper[4698]: E1006 12:26:56.605042 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="extract-utilities" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605069 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="extract-utilities" Oct 06 12:26:56 crc kubenswrapper[4698]: E1006 12:26:56.605098 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="extract-content" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605109 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="extract-content" Oct 06 12:26:56 crc kubenswrapper[4698]: E1006 12:26:56.605147 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a102252-962d-4cb3-970b-acd2557e633e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605158 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a102252-962d-4cb3-970b-acd2557e633e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:56 crc kubenswrapper[4698]: E1006 12:26:56.605176 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="registry-server" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605184 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="registry-server" Oct 06 
12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605476 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a102252-962d-4cb3-970b-acd2557e633e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.605505 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9248c6a7-f43b-438d-8a1f-06e4a7028da1" containerName="registry-server" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.606689 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.610589 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.611543 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.612598 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.613614 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.613638 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.613640 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.613960 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.634054 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf"] Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804031 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804125 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804156 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtl6r\" (UniqueName: \"kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804211 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.804320 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.908742 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.908894 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtl6r\" (UniqueName: \"kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.909061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.909129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 
12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.909302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.909361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.910222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.910591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.911044 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.911761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.915762 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.915991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.916357 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 
12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.916587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.918582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.924819 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.925233 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:56 crc kubenswrapper[4698]: I1006 12:26:56.943697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtl6r\" (UniqueName: \"kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-786mf\" (UID: 
\"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:57 crc kubenswrapper[4698]: I1006 12:26:57.236154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:26:57 crc kubenswrapper[4698]: I1006 12:26:57.856689 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf"] Oct 06 12:26:57 crc kubenswrapper[4698]: I1006 12:26:57.864909 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:26:58 crc kubenswrapper[4698]: I1006 12:26:58.444991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" event={"ID":"9853ba7c-85b2-4a97-ac8c-80be3f979248","Type":"ContainerStarted","Data":"84eaedd7ec03c2ef0fbeff67376d57a29eda006a45f4c1cbedc59ef0b704f49f"} Oct 06 12:26:59 crc kubenswrapper[4698]: I1006 12:26:59.460926 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" event={"ID":"9853ba7c-85b2-4a97-ac8c-80be3f979248","Type":"ContainerStarted","Data":"6963d751425092dd2a2b283a41f7447ede0aff59c70ddbc2f5b2881ffe6b9f71"} Oct 06 12:26:59 crc kubenswrapper[4698]: I1006 12:26:59.494652 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" podStartSLOduration=3.075700826 podStartE2EDuration="3.494620304s" podCreationTimestamp="2025-10-06 12:26:56 +0000 UTC" firstStartedPulling="2025-10-06 12:26:57.864641303 +0000 UTC m=+2505.277333486" lastFinishedPulling="2025-10-06 12:26:58.283560791 +0000 UTC m=+2505.696252964" observedRunningTime="2025-10-06 12:26:59.488237606 +0000 UTC m=+2506.900929789" watchObservedRunningTime="2025-10-06 12:26:59.494620304 +0000 UTC m=+2506.907312507" Oct 06 12:28:25 crc 
kubenswrapper[4698]: I1006 12:28:25.237003 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:28:25 crc kubenswrapper[4698]: I1006 12:28:25.237682 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.314466 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.320078 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.363123 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.455883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrkz\" (UniqueName: \"kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.455952 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.456571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.558952 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrkz\" (UniqueName: \"kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.559038 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.559123 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.559668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.559891 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.582462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrkz\" (UniqueName: \"kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz\") pod \"certified-operators-h4fnh\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:41 crc kubenswrapper[4698]: I1006 12:28:41.665667 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:42 crc kubenswrapper[4698]: I1006 12:28:42.191773 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:42 crc kubenswrapper[4698]: I1006 12:28:42.844618 4698 generic.go:334] "Generic (PLEG): container finished" podID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerID="9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6" exitCode=0 Oct 06 12:28:42 crc kubenswrapper[4698]: I1006 12:28:42.844710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerDied","Data":"9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6"} Oct 06 12:28:42 crc kubenswrapper[4698]: I1006 12:28:42.844768 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerStarted","Data":"b5a4c5d17bcad621620ce622ebb1e8c5f0741c77fc7860a6d16509095c94930e"} Oct 06 12:28:44 crc kubenswrapper[4698]: I1006 12:28:44.870452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerStarted","Data":"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c"} Oct 06 12:28:45 crc kubenswrapper[4698]: I1006 12:28:45.885794 4698 generic.go:334] "Generic (PLEG): container finished" podID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerID="49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c" exitCode=0 Oct 06 12:28:45 crc kubenswrapper[4698]: I1006 12:28:45.885878 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" 
event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerDied","Data":"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c"} Oct 06 12:28:46 crc kubenswrapper[4698]: I1006 12:28:46.929142 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerStarted","Data":"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17"} Oct 06 12:28:46 crc kubenswrapper[4698]: I1006 12:28:46.958745 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4fnh" podStartSLOduration=2.424522536 podStartE2EDuration="5.958711852s" podCreationTimestamp="2025-10-06 12:28:41 +0000 UTC" firstStartedPulling="2025-10-06 12:28:42.847369457 +0000 UTC m=+2610.260061640" lastFinishedPulling="2025-10-06 12:28:46.381558773 +0000 UTC m=+2613.794250956" observedRunningTime="2025-10-06 12:28:46.952234772 +0000 UTC m=+2614.364926985" watchObservedRunningTime="2025-10-06 12:28:46.958711852 +0000 UTC m=+2614.371404045" Oct 06 12:28:51 crc kubenswrapper[4698]: I1006 12:28:51.666287 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:51 crc kubenswrapper[4698]: I1006 12:28:51.667035 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:51 crc kubenswrapper[4698]: I1006 12:28:51.758832 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:52 crc kubenswrapper[4698]: I1006 12:28:52.090776 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:52 crc kubenswrapper[4698]: I1006 12:28:52.184121 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.031208 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4fnh" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="registry-server" containerID="cri-o://43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17" gracePeriod=2 Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.569427 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.659773 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content\") pod \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.660446 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzrkz\" (UniqueName: \"kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz\") pod \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.660583 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities\") pod \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\" (UID: \"65eef7fb-c2d4-4553-8c08-2f25c34961ee\") " Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.661442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities" (OuterVolumeSpecName: "utilities") pod "65eef7fb-c2d4-4553-8c08-2f25c34961ee" (UID: 
"65eef7fb-c2d4-4553-8c08-2f25c34961ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.662064 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.670288 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz" (OuterVolumeSpecName: "kube-api-access-xzrkz") pod "65eef7fb-c2d4-4553-8c08-2f25c34961ee" (UID: "65eef7fb-c2d4-4553-8c08-2f25c34961ee"). InnerVolumeSpecName "kube-api-access-xzrkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.727750 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65eef7fb-c2d4-4553-8c08-2f25c34961ee" (UID: "65eef7fb-c2d4-4553-8c08-2f25c34961ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.764432 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzrkz\" (UniqueName: \"kubernetes.io/projected/65eef7fb-c2d4-4553-8c08-2f25c34961ee-kube-api-access-xzrkz\") on node \"crc\" DevicePath \"\"" Oct 06 12:28:54 crc kubenswrapper[4698]: I1006 12:28:54.764489 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eef7fb-c2d4-4553-8c08-2f25c34961ee-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.047770 4698 generic.go:334] "Generic (PLEG): container finished" podID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerID="43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17" exitCode=0 Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.048030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerDied","Data":"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17"} Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.048627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4fnh" event={"ID":"65eef7fb-c2d4-4553-8c08-2f25c34961ee","Type":"ContainerDied","Data":"b5a4c5d17bcad621620ce622ebb1e8c5f0741c77fc7860a6d16509095c94930e"} Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.048160 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4fnh" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.048713 4698 scope.go:117] "RemoveContainer" containerID="43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.101836 4698 scope.go:117] "RemoveContainer" containerID="49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.113817 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.132899 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4fnh"] Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.135564 4698 scope.go:117] "RemoveContainer" containerID="9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.189365 4698 scope.go:117] "RemoveContainer" containerID="43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17" Oct 06 12:28:55 crc kubenswrapper[4698]: E1006 12:28:55.189739 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17\": container with ID starting with 43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17 not found: ID does not exist" containerID="43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.189807 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17"} err="failed to get container status \"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17\": rpc error: code = NotFound desc = could not find 
container \"43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17\": container with ID starting with 43f9d7521b169605fb13aa09ff0e6ce3b652af255d9dae165c46995888b3ab17 not found: ID does not exist" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.189846 4698 scope.go:117] "RemoveContainer" containerID="49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c" Oct 06 12:28:55 crc kubenswrapper[4698]: E1006 12:28:55.190231 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c\": container with ID starting with 49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c not found: ID does not exist" containerID="49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.190255 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c"} err="failed to get container status \"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c\": rpc error: code = NotFound desc = could not find container \"49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c\": container with ID starting with 49750d83722bfdb42a6a46f2de347be1ebfb5956257773b0e1fa25337bd53d9c not found: ID does not exist" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.190273 4698 scope.go:117] "RemoveContainer" containerID="9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6" Oct 06 12:28:55 crc kubenswrapper[4698]: E1006 12:28:55.190489 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6\": container with ID starting with 9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6 not found: ID does 
not exist" containerID="9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.190511 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6"} err="failed to get container status \"9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6\": rpc error: code = NotFound desc = could not find container \"9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6\": container with ID starting with 9cec763fcb95aca19990624ec17160496a6373de442ff07d069076d1c254eac6 not found: ID does not exist" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.235293 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.235409 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:28:55 crc kubenswrapper[4698]: I1006 12:28:55.346526 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" path="/var/lib/kubelet/pods/65eef7fb-c2d4-4553-8c08-2f25c34961ee/volumes" Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.235941 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.236943 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.237047 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.238349 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.238542 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94" gracePeriod=600 Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.450030 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94" exitCode=0 Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.450080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94"} Oct 06 12:29:25 crc kubenswrapper[4698]: I1006 12:29:25.450124 4698 scope.go:117] "RemoveContainer" containerID="1243d02597de98a06580f878f4f48c2ac576400a1464099d79d5e36d7f89eb9b" Oct 06 12:29:26 crc kubenswrapper[4698]: I1006 12:29:26.462839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e"} Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.226005 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:27 crc kubenswrapper[4698]: E1006 12:29:27.227278 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="registry-server" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.227305 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="registry-server" Oct 06 12:29:27 crc kubenswrapper[4698]: E1006 12:29:27.227331 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="extract-utilities" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.227339 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="extract-utilities" Oct 06 12:29:27 crc kubenswrapper[4698]: E1006 12:29:27.227357 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="extract-content" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.227365 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" 
containerName="extract-content" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.227639 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eef7fb-c2d4-4553-8c08-2f25c34961ee" containerName="registry-server" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.229497 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.245597 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.263273 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.263405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtfx\" (UniqueName: \"kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.263643 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.366058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbtfx\" 
(UniqueName: \"kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.366209 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.366322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.366962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.367165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.389834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbtfx\" (UniqueName: 
\"kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx\") pod \"redhat-operators-khhhn\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:27 crc kubenswrapper[4698]: I1006 12:29:27.558977 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:28 crc kubenswrapper[4698]: W1006 12:29:28.111768 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c205a9_fc79_4c6e_bc42_f60c4d6934df.slice/crio-38ae470658c1d10403e7c3e36d8e625448c21117b8812e11394eb690ad6eae1f WatchSource:0}: Error finding container 38ae470658c1d10403e7c3e36d8e625448c21117b8812e11394eb690ad6eae1f: Status 404 returned error can't find the container with id 38ae470658c1d10403e7c3e36d8e625448c21117b8812e11394eb690ad6eae1f Oct 06 12:29:28 crc kubenswrapper[4698]: I1006 12:29:28.112797 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:28 crc kubenswrapper[4698]: I1006 12:29:28.483489 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerID="a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5" exitCode=0 Oct 06 12:29:28 crc kubenswrapper[4698]: I1006 12:29:28.483562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerDied","Data":"a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5"} Oct 06 12:29:28 crc kubenswrapper[4698]: I1006 12:29:28.483746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" 
event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerStarted","Data":"38ae470658c1d10403e7c3e36d8e625448c21117b8812e11394eb690ad6eae1f"} Oct 06 12:29:30 crc kubenswrapper[4698]: I1006 12:29:30.532832 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerStarted","Data":"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec"} Oct 06 12:29:32 crc kubenswrapper[4698]: I1006 12:29:32.590075 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerID="a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec" exitCode=0 Oct 06 12:29:32 crc kubenswrapper[4698]: I1006 12:29:32.590421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerDied","Data":"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec"} Oct 06 12:29:33 crc kubenswrapper[4698]: I1006 12:29:33.605345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerStarted","Data":"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8"} Oct 06 12:29:33 crc kubenswrapper[4698]: I1006 12:29:33.639595 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khhhn" podStartSLOduration=2.090670438 podStartE2EDuration="6.639565277s" podCreationTimestamp="2025-10-06 12:29:27 +0000 UTC" firstStartedPulling="2025-10-06 12:29:28.485818917 +0000 UTC m=+2655.898511090" lastFinishedPulling="2025-10-06 12:29:33.034713746 +0000 UTC m=+2660.447405929" observedRunningTime="2025-10-06 12:29:33.625719164 +0000 UTC m=+2661.038411337" watchObservedRunningTime="2025-10-06 12:29:33.639565277 +0000 UTC m=+2661.052257460" 
Oct 06 12:29:37 crc kubenswrapper[4698]: I1006 12:29:37.560767 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:37 crc kubenswrapper[4698]: I1006 12:29:37.561530 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:38 crc kubenswrapper[4698]: I1006 12:29:38.627723 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khhhn" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="registry-server" probeResult="failure" output=< Oct 06 12:29:38 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 12:29:38 crc kubenswrapper[4698]: > Oct 06 12:29:47 crc kubenswrapper[4698]: I1006 12:29:47.646328 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:47 crc kubenswrapper[4698]: I1006 12:29:47.711761 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:47 crc kubenswrapper[4698]: I1006 12:29:47.890924 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:48 crc kubenswrapper[4698]: I1006 12:29:48.768842 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khhhn" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="registry-server" containerID="cri-o://6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8" gracePeriod=2 Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.315403 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.412378 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content\") pod \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.412547 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities\") pod \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.412709 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbtfx\" (UniqueName: \"kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx\") pod \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\" (UID: \"e4c205a9-fc79-4c6e-bc42-f60c4d6934df\") " Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.414050 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities" (OuterVolumeSpecName: "utilities") pod "e4c205a9-fc79-4c6e-bc42-f60c4d6934df" (UID: "e4c205a9-fc79-4c6e-bc42-f60c4d6934df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.419250 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx" (OuterVolumeSpecName: "kube-api-access-fbtfx") pod "e4c205a9-fc79-4c6e-bc42-f60c4d6934df" (UID: "e4c205a9-fc79-4c6e-bc42-f60c4d6934df"). InnerVolumeSpecName "kube-api-access-fbtfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.497324 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4c205a9-fc79-4c6e-bc42-f60c4d6934df" (UID: "e4c205a9-fc79-4c6e-bc42-f60c4d6934df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.515941 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.515985 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbtfx\" (UniqueName: \"kubernetes.io/projected/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-kube-api-access-fbtfx\") on node \"crc\" DevicePath \"\"" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.516000 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4c205a9-fc79-4c6e-bc42-f60c4d6934df-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.787902 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerID="6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8" exitCode=0 Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.787969 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerDied","Data":"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8"} Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.788038 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-khhhn" event={"ID":"e4c205a9-fc79-4c6e-bc42-f60c4d6934df","Type":"ContainerDied","Data":"38ae470658c1d10403e7c3e36d8e625448c21117b8812e11394eb690ad6eae1f"} Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.788067 4698 scope.go:117] "RemoveContainer" containerID="6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.788299 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khhhn" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.824699 4698 scope.go:117] "RemoveContainer" containerID="a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.862311 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.873571 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khhhn"] Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.878681 4698 scope.go:117] "RemoveContainer" containerID="a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.921144 4698 scope.go:117] "RemoveContainer" containerID="6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8" Oct 06 12:29:49 crc kubenswrapper[4698]: E1006 12:29:49.921715 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8\": container with ID starting with 6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8 not found: ID does not exist" containerID="6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.921755 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8"} err="failed to get container status \"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8\": rpc error: code = NotFound desc = could not find container \"6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8\": container with ID starting with 6cf68413322a6de0e28013007fb35b6244c4039a62456f0a72a03e6b9e09ada8 not found: ID does not exist" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.921780 4698 scope.go:117] "RemoveContainer" containerID="a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec" Oct 06 12:29:49 crc kubenswrapper[4698]: E1006 12:29:49.922314 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec\": container with ID starting with a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec not found: ID does not exist" containerID="a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.922354 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec"} err="failed to get container status \"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec\": rpc error: code = NotFound desc = could not find container \"a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec\": container with ID starting with a89c573072bdac2a4ef7ad9ea172924d77e7bd2a53e19b83ebba259d3e788bec not found: ID does not exist" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.922370 4698 scope.go:117] "RemoveContainer" containerID="a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5" Oct 06 12:29:49 crc kubenswrapper[4698]: E1006 
12:29:49.922783 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5\": container with ID starting with a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5 not found: ID does not exist" containerID="a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5" Oct 06 12:29:49 crc kubenswrapper[4698]: I1006 12:29:49.922805 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5"} err="failed to get container status \"a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5\": rpc error: code = NotFound desc = could not find container \"a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5\": container with ID starting with a73e09f75361098561dd5538e6224999007f8d593d1b7de0617738b4af69c6c5 not found: ID does not exist" Oct 06 12:29:51 crc kubenswrapper[4698]: I1006 12:29:51.350642 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" path="/var/lib/kubelet/pods/e4c205a9-fc79-4c6e-bc42-f60c4d6934df/volumes" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.181589 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs"] Oct 06 12:30:00 crc kubenswrapper[4698]: E1006 12:30:00.183447 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="extract-utilities" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.183475 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="extract-utilities" Oct 06 12:30:00 crc kubenswrapper[4698]: E1006 12:30:00.183506 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="extract-content" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.183519 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="extract-content" Oct 06 12:30:00 crc kubenswrapper[4698]: E1006 12:30:00.183549 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="registry-server" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.183565 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="registry-server" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.184008 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c205a9-fc79-4c6e-bc42-f60c4d6934df" containerName="registry-server" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.185252 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.190379 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.190386 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.193560 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs"] Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.293487 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v747c\" (UniqueName: \"kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c\") pod 
\"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.293552 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.293893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.396220 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v747c\" (UniqueName: \"kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.396279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.396346 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.397797 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.406567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.429922 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v747c\" (UniqueName: \"kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c\") pod \"collect-profiles-29329230-z7hrs\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:00 crc kubenswrapper[4698]: I1006 12:30:00.539788 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:01 crc kubenswrapper[4698]: I1006 12:30:01.012433 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs"] Oct 06 12:30:01 crc kubenswrapper[4698]: I1006 12:30:01.937328 4698 generic.go:334] "Generic (PLEG): container finished" podID="3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" containerID="3475ef7db1513aeb6939d24de2f074423c6bdfbfac12119656caab1bacae111c" exitCode=0 Oct 06 12:30:01 crc kubenswrapper[4698]: I1006 12:30:01.937434 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" event={"ID":"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc","Type":"ContainerDied","Data":"3475ef7db1513aeb6939d24de2f074423c6bdfbfac12119656caab1bacae111c"} Oct 06 12:30:01 crc kubenswrapper[4698]: I1006 12:30:01.937930 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" event={"ID":"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc","Type":"ContainerStarted","Data":"dcb42b76b85b8ca4a74e67197dc5b75f7e292b8d5ac980c123736afd45d0698d"} Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.342626 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.476372 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume\") pod \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.476568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v747c\" (UniqueName: \"kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c\") pod \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.476957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume\") pod \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\" (UID: \"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc\") " Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.478004 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" (UID: "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.486792 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c" (OuterVolumeSpecName: "kube-api-access-v747c") pod "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" (UID: "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc"). 
InnerVolumeSpecName "kube-api-access-v747c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.489967 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" (UID: "3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.580319 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.580943 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.580971 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v747c\" (UniqueName: \"kubernetes.io/projected/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc-kube-api-access-v747c\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.970095 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" event={"ID":"3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc","Type":"ContainerDied","Data":"dcb42b76b85b8ca4a74e67197dc5b75f7e292b8d5ac980c123736afd45d0698d"} Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.970199 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcb42b76b85b8ca4a74e67197dc5b75f7e292b8d5ac980c123736afd45d0698d" Oct 06 12:30:03 crc kubenswrapper[4698]: I1006 12:30:03.970232 4698 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs" Oct 06 12:30:04 crc kubenswrapper[4698]: I1006 12:30:04.441727 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"] Oct 06 12:30:04 crc kubenswrapper[4698]: I1006 12:30:04.458323 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329185-5vkh9"] Oct 06 12:30:05 crc kubenswrapper[4698]: I1006 12:30:05.346417 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a6c7dd-61b1-4609-a50d-bba142afd5f6" path="/var/lib/kubelet/pods/b6a6c7dd-61b1-4609-a50d-bba142afd5f6/volumes" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.035506 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:12 crc kubenswrapper[4698]: E1006 12:30:12.036902 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" containerName="collect-profiles" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.036922 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" containerName="collect-profiles" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.037317 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" containerName="collect-profiles" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.039397 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.050454 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.109298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.109731 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.109938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qzb\" (UniqueName: \"kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.213763 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.213830 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-27qzb\" (UniqueName: \"kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.213900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.214373 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.214457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.234885 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qzb\" (UniqueName: \"kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb\") pod \"community-operators-dpsgz\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.379933 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:12 crc kubenswrapper[4698]: I1006 12:30:12.942479 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:12 crc kubenswrapper[4698]: W1006 12:30:12.952243 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80efd2ac_ad5c_4e8f_a597_69d8ead759ef.slice/crio-9881f97ae946c759e0f1962d6517f3875f4fb2c18ee076ac28931d2763461fdf WatchSource:0}: Error finding container 9881f97ae946c759e0f1962d6517f3875f4fb2c18ee076ac28931d2763461fdf: Status 404 returned error can't find the container with id 9881f97ae946c759e0f1962d6517f3875f4fb2c18ee076ac28931d2763461fdf Oct 06 12:30:13 crc kubenswrapper[4698]: I1006 12:30:13.114483 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerStarted","Data":"9881f97ae946c759e0f1962d6517f3875f4fb2c18ee076ac28931d2763461fdf"} Oct 06 12:30:13 crc kubenswrapper[4698]: E1006 12:30:13.426070 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80efd2ac_ad5c_4e8f_a597_69d8ead759ef.slice/crio-conmon-c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80efd2ac_ad5c_4e8f_a597_69d8ead759ef.slice/crio-c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22.scope\": RecentStats: unable to find data in memory cache]" Oct 06 12:30:14 crc kubenswrapper[4698]: I1006 12:30:14.130258 4698 generic.go:334] "Generic (PLEG): container finished" podID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" 
containerID="c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22" exitCode=0 Oct 06 12:30:14 crc kubenswrapper[4698]: I1006 12:30:14.130466 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerDied","Data":"c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22"} Oct 06 12:30:16 crc kubenswrapper[4698]: I1006 12:30:16.158647 4698 generic.go:334] "Generic (PLEG): container finished" podID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerID="ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d" exitCode=0 Oct 06 12:30:16 crc kubenswrapper[4698]: I1006 12:30:16.158759 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerDied","Data":"ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d"} Oct 06 12:30:17 crc kubenswrapper[4698]: I1006 12:30:17.181331 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerStarted","Data":"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2"} Oct 06 12:30:17 crc kubenswrapper[4698]: I1006 12:30:17.219647 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dpsgz" podStartSLOduration=2.562577736 podStartE2EDuration="5.219610265s" podCreationTimestamp="2025-10-06 12:30:12 +0000 UTC" firstStartedPulling="2025-10-06 12:30:14.13316096 +0000 UTC m=+2701.545853133" lastFinishedPulling="2025-10-06 12:30:16.790193459 +0000 UTC m=+2704.202885662" observedRunningTime="2025-10-06 12:30:17.208195612 +0000 UTC m=+2704.620887825" watchObservedRunningTime="2025-10-06 12:30:17.219610265 +0000 UTC m=+2704.632302478" Oct 06 12:30:22 crc kubenswrapper[4698]: I1006 
12:30:22.380142 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:22 crc kubenswrapper[4698]: I1006 12:30:22.380836 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:22 crc kubenswrapper[4698]: I1006 12:30:22.454031 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:23 crc kubenswrapper[4698]: I1006 12:30:23.345819 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:23 crc kubenswrapper[4698]: I1006 12:30:23.417066 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.301217 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dpsgz" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="registry-server" containerID="cri-o://8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2" gracePeriod=2 Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.873493 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.974792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content\") pod \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.975379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities\") pod \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.976108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qzb\" (UniqueName: \"kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb\") pod \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\" (UID: \"80efd2ac-ad5c-4e8f-a597-69d8ead759ef\") " Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.976539 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities" (OuterVolumeSpecName: "utilities") pod "80efd2ac-ad5c-4e8f-a597-69d8ead759ef" (UID: "80efd2ac-ad5c-4e8f-a597-69d8ead759ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.977100 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:25 crc kubenswrapper[4698]: I1006 12:30:25.985674 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb" (OuterVolumeSpecName: "kube-api-access-27qzb") pod "80efd2ac-ad5c-4e8f-a597-69d8ead759ef" (UID: "80efd2ac-ad5c-4e8f-a597-69d8ead759ef"). InnerVolumeSpecName "kube-api-access-27qzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.079005 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qzb\" (UniqueName: \"kubernetes.io/projected/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-kube-api-access-27qzb\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.248044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80efd2ac-ad5c-4e8f-a597-69d8ead759ef" (UID: "80efd2ac-ad5c-4e8f-a597-69d8ead759ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.288263 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efd2ac-ad5c-4e8f-a597-69d8ead759ef-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.317360 4698 generic.go:334] "Generic (PLEG): container finished" podID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerID="8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2" exitCode=0 Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.317444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerDied","Data":"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2"} Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.317505 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dpsgz" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.317561 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpsgz" event={"ID":"80efd2ac-ad5c-4e8f-a597-69d8ead759ef","Type":"ContainerDied","Data":"9881f97ae946c759e0f1962d6517f3875f4fb2c18ee076ac28931d2763461fdf"} Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.317602 4698 scope.go:117] "RemoveContainer" containerID="8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.353327 4698 scope.go:117] "RemoveContainer" containerID="ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.378428 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.389955 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dpsgz"] Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.397736 4698 scope.go:117] "RemoveContainer" containerID="c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.472766 4698 scope.go:117] "RemoveContainer" containerID="8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2" Oct 06 12:30:26 crc kubenswrapper[4698]: E1006 12:30:26.474831 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2\": container with ID starting with 8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2 not found: ID does not exist" containerID="8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.474912 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2"} err="failed to get container status \"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2\": rpc error: code = NotFound desc = could not find container \"8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2\": container with ID starting with 8370a81eabc4ecefc9b077d0b7f7500377e702d805eb7d1e8762bd95e5dff5b2 not found: ID does not exist" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.474957 4698 scope.go:117] "RemoveContainer" containerID="ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d" Oct 06 12:30:26 crc kubenswrapper[4698]: E1006 12:30:26.476400 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d\": container with ID starting with ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d not found: ID does not exist" containerID="ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.476442 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d"} err="failed to get container status \"ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d\": rpc error: code = NotFound desc = could not find container \"ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d\": container with ID starting with ddc9601f0b7ad13f751c4578fa393fefc8584ccecc4e0b93286dc765956e0a9d not found: ID does not exist" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.476466 4698 scope.go:117] "RemoveContainer" containerID="c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22" Oct 06 12:30:26 crc kubenswrapper[4698]: E1006 
12:30:26.477846 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22\": container with ID starting with c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22 not found: ID does not exist" containerID="c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22" Oct 06 12:30:26 crc kubenswrapper[4698]: I1006 12:30:26.477883 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22"} err="failed to get container status \"c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22\": rpc error: code = NotFound desc = could not find container \"c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22\": container with ID starting with c4b9768a1b7c1ac7e6162ac8e8a84e441ddd2192392d852b058405b956c2ae22 not found: ID does not exist" Oct 06 12:30:27 crc kubenswrapper[4698]: I1006 12:30:27.348447 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" path="/var/lib/kubelet/pods/80efd2ac-ad5c-4e8f-a597-69d8ead759ef/volumes" Oct 06 12:30:43 crc kubenswrapper[4698]: I1006 12:30:43.397713 4698 scope.go:117] "RemoveContainer" containerID="aa9fa273109015c2db531f14de4d03dd9dd501a3327d64d38d54177a2d88fa72" Oct 06 12:30:56 crc kubenswrapper[4698]: I1006 12:30:56.741393 4698 generic.go:334] "Generic (PLEG): container finished" podID="9853ba7c-85b2-4a97-ac8c-80be3f979248" containerID="6963d751425092dd2a2b283a41f7447ede0aff59c70ddbc2f5b2881ffe6b9f71" exitCode=0 Oct 06 12:30:56 crc kubenswrapper[4698]: I1006 12:30:56.741468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" 
event={"ID":"9853ba7c-85b2-4a97-ac8c-80be3f979248","Type":"ContainerDied","Data":"6963d751425092dd2a2b283a41f7447ede0aff59c70ddbc2f5b2881ffe6b9f71"} Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.303621 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtl6r\" (UniqueName: \"kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499434 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499582 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499611 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499790 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.499920 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory\") pod \"9853ba7c-85b2-4a97-ac8c-80be3f979248\" (UID: \"9853ba7c-85b2-4a97-ac8c-80be3f979248\") " Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.508528 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: 
"9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.520378 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r" (OuterVolumeSpecName: "kube-api-access-gtl6r") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "kube-api-access-gtl6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.535033 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.546721 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.552344 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.552449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.563592 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.567712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory" (OuterVolumeSpecName: "inventory") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.568123 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9853ba7c-85b2-4a97-ac8c-80be3f979248" (UID: "9853ba7c-85b2-4a97-ac8c-80be3f979248"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603056 4698 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603418 4698 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603430 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603439 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603449 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtl6r\" (UniqueName: \"kubernetes.io/projected/9853ba7c-85b2-4a97-ac8c-80be3f979248-kube-api-access-gtl6r\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603460 4698 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603471 4698 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc 
kubenswrapper[4698]: I1006 12:30:58.603481 4698 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.603493 4698 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9853ba7c-85b2-4a97-ac8c-80be3f979248-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.763437 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" event={"ID":"9853ba7c-85b2-4a97-ac8c-80be3f979248","Type":"ContainerDied","Data":"84eaedd7ec03c2ef0fbeff67376d57a29eda006a45f4c1cbedc59ef0b704f49f"} Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.763487 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84eaedd7ec03c2ef0fbeff67376d57a29eda006a45f4c1cbedc59ef0b704f49f" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.763551 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-786mf" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.896158 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62"] Oct 06 12:30:58 crc kubenswrapper[4698]: E1006 12:30:58.896901 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="registry-server" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.896941 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="registry-server" Oct 06 12:30:58 crc kubenswrapper[4698]: E1006 12:30:58.896976 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9853ba7c-85b2-4a97-ac8c-80be3f979248" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.896990 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9853ba7c-85b2-4a97-ac8c-80be3f979248" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:58 crc kubenswrapper[4698]: E1006 12:30:58.897051 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="extract-utilities" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.897065 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="extract-utilities" Oct 06 12:30:58 crc kubenswrapper[4698]: E1006 12:30:58.897115 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="extract-content" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.897130 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="extract-content" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.897536 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9853ba7c-85b2-4a97-ac8c-80be3f979248" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.897602 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="80efd2ac-ad5c-4e8f-a597-69d8ead759ef" containerName="registry-server" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.898991 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.902275 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2j94" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.902461 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.902643 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.902821 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.903086 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 12:30:58 crc kubenswrapper[4698]: I1006 12:30:58.920728 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62"] Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011325 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: 
\"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011424 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlf7\" (UniqueName: \"kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011561 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011636 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011687 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.011738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.113893 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114228 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114437 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlf7\" (UniqueName: \"kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.114568 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.123072 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.123955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.123984 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.125951 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.130922 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.141333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.153560 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlf7\" (UniqueName: \"kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d5d62\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.225527 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:30:59 crc kubenswrapper[4698]: I1006 12:30:59.878298 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62"] Oct 06 12:31:00 crc kubenswrapper[4698]: I1006 12:31:00.792206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" event={"ID":"ff7ed42f-2288-48ac-9f89-9305e2f4a151","Type":"ContainerStarted","Data":"c2af3d821006987bb5f7b40ce4de6448a61bba952562ec608f3ed399a6670ab0"} Oct 06 12:31:01 crc kubenswrapper[4698]: I1006 12:31:01.812069 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" event={"ID":"ff7ed42f-2288-48ac-9f89-9305e2f4a151","Type":"ContainerStarted","Data":"19af9ec9694af3fda4eb3f2065e31035d20e51d95da434839a6bbf7996f3bf1f"} Oct 06 12:31:25 crc kubenswrapper[4698]: I1006 12:31:25.235561 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:31:25 crc kubenswrapper[4698]: I1006 12:31:25.236478 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:31:55 crc kubenswrapper[4698]: I1006 12:31:55.235537 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:31:55 crc kubenswrapper[4698]: I1006 12:31:55.236657 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.235580 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.236501 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.236582 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.237973 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.238063 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" gracePeriod=600 Oct 06 12:32:25 crc kubenswrapper[4698]: E1006 12:32:25.375514 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.939792 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" exitCode=0 Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.939907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e"} Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.940343 4698 scope.go:117] "RemoveContainer" containerID="6101d87b92f52608ccd8fec10e65d5aed0bfe3b24f6ea8cb7bdf04a66e58dd94" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.941857 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:32:25 crc kubenswrapper[4698]: E1006 12:32:25.942290 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:32:25 crc kubenswrapper[4698]: I1006 12:32:25.966560 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" podStartSLOduration=87.196150547 podStartE2EDuration="1m27.966530688s" podCreationTimestamp="2025-10-06 12:30:58 +0000 UTC" firstStartedPulling="2025-10-06 12:30:59.898149558 +0000 UTC m=+2747.310841771" lastFinishedPulling="2025-10-06 12:31:00.668529709 +0000 UTC m=+2748.081221912" observedRunningTime="2025-10-06 12:31:01.843283085 +0000 UTC m=+2749.255975268" watchObservedRunningTime="2025-10-06 12:32:25.966530688 +0000 UTC m=+2833.379222871" Oct 06 12:32:40 crc kubenswrapper[4698]: I1006 12:32:40.329661 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:32:40 crc kubenswrapper[4698]: E1006 12:32:40.330886 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:32:55 crc kubenswrapper[4698]: I1006 12:32:55.329087 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:32:55 crc kubenswrapper[4698]: E1006 12:32:55.330088 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:33:08 crc kubenswrapper[4698]: I1006 12:33:08.330577 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:33:08 crc kubenswrapper[4698]: E1006 12:33:08.331872 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:33:20 crc kubenswrapper[4698]: I1006 12:33:20.328782 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:33:20 crc kubenswrapper[4698]: E1006 12:33:20.329664 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:33:33 crc kubenswrapper[4698]: I1006 12:33:33.344493 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:33:33 crc kubenswrapper[4698]: E1006 12:33:33.346557 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:33:41 crc kubenswrapper[4698]: I1006 12:33:41.980952 4698 generic.go:334] "Generic (PLEG): container finished" podID="ff7ed42f-2288-48ac-9f89-9305e2f4a151" containerID="19af9ec9694af3fda4eb3f2065e31035d20e51d95da434839a6bbf7996f3bf1f" exitCode=0 Oct 06 12:33:41 crc kubenswrapper[4698]: I1006 12:33:41.981084 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" event={"ID":"ff7ed42f-2288-48ac-9f89-9305e2f4a151","Type":"ContainerDied","Data":"19af9ec9694af3fda4eb3f2065e31035d20e51d95da434839a6bbf7996f3bf1f"} Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.518554 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637380 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637521 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlf7\" (UniqueName: \"kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637596 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637616 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637703 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637746 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.637808 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1\") pod \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\" (UID: \"ff7ed42f-2288-48ac-9f89-9305e2f4a151\") " Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.686048 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7" (OuterVolumeSpecName: "kube-api-access-9qlf7") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: 
"ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "kube-api-access-9qlf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.686643 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.693548 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory" (OuterVolumeSpecName: "inventory") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.698619 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.699397 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.719940 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.727691 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ff7ed42f-2288-48ac-9f89-9305e2f4a151" (UID: "ff7ed42f-2288-48ac-9f89-9305e2f4a151"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740169 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlf7\" (UniqueName: \"kubernetes.io/projected/ff7ed42f-2288-48ac-9f89-9305e2f4a151-kube-api-access-9qlf7\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740211 4698 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740228 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740241 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740253 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740265 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:43 crc kubenswrapper[4698]: I1006 12:33:43.740274 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ff7ed42f-2288-48ac-9f89-9305e2f4a151-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:33:44 crc kubenswrapper[4698]: I1006 12:33:44.014783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" event={"ID":"ff7ed42f-2288-48ac-9f89-9305e2f4a151","Type":"ContainerDied","Data":"c2af3d821006987bb5f7b40ce4de6448a61bba952562ec608f3ed399a6670ab0"} Oct 06 12:33:44 crc kubenswrapper[4698]: I1006 12:33:44.014847 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2af3d821006987bb5f7b40ce4de6448a61bba952562ec608f3ed399a6670ab0" Oct 06 12:33:44 crc kubenswrapper[4698]: I1006 12:33:44.014984 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d5d62" Oct 06 12:33:45 crc kubenswrapper[4698]: I1006 12:33:45.331949 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:33:45 crc kubenswrapper[4698]: E1006 12:33:45.332906 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:33:58 crc kubenswrapper[4698]: I1006 12:33:58.329858 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:33:58 crc kubenswrapper[4698]: E1006 12:33:58.331458 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:34:12 crc kubenswrapper[4698]: I1006 12:34:12.330503 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:34:12 crc kubenswrapper[4698]: E1006 12:34:12.332514 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.289208 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.290444 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="prometheus" containerID="cri-o://f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5" gracePeriod=600 Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.290571 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="thanos-sidecar" containerID="cri-o://3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202" gracePeriod=600 Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.290633 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="config-reloader" containerID="cri-o://59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7" gracePeriod=600 Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.600941 4698 generic.go:334] "Generic (PLEG): container finished" podID="40388b3e-433c-484a-b0aa-c7e427601657" containerID="3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202" exitCode=0 Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.600985 4698 generic.go:334] "Generic (PLEG): container finished" podID="40388b3e-433c-484a-b0aa-c7e427601657" containerID="f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5" exitCode=0 Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.601006 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerDied","Data":"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202"} Oct 06 12:34:20 crc kubenswrapper[4698]: I1006 12:34:20.601053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerDied","Data":"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5"} Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.379128 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494249 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494806 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8778x\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494952 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.494998 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.495176 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.495208 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.495253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.495313 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.495337 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"40388b3e-433c-484a-b0aa-c7e427601657\" (UID: \"40388b3e-433c-484a-b0aa-c7e427601657\") " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.504758 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.506037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.513787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.513852 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.513974 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x" (OuterVolumeSpecName: "kube-api-access-8778x") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "kube-api-access-8778x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.514105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.514641 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out" (OuterVolumeSpecName: "config-out") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.532861 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config" (OuterVolumeSpecName: "config") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.537497 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.553796 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597533 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") on node \"crc\" " Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597568 4698 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597581 4698 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597592 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8778x\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-kube-api-access-8778x\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597601 4698 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40388b3e-433c-484a-b0aa-c7e427601657-tls-assets\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597612 4698 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597625 4698 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/40388b3e-433c-484a-b0aa-c7e427601657-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597635 4698 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40388b3e-433c-484a-b0aa-c7e427601657-config-out\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597645 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.597655 4698 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.610655 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config" (OuterVolumeSpecName: "web-config") pod "40388b3e-433c-484a-b0aa-c7e427601657" (UID: "40388b3e-433c-484a-b0aa-c7e427601657"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.632203 4698 generic.go:334] "Generic (PLEG): container finished" podID="40388b3e-433c-484a-b0aa-c7e427601657" containerID="59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7" exitCode=0 Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.632597 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerDied","Data":"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7"} Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.632711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"40388b3e-433c-484a-b0aa-c7e427601657","Type":"ContainerDied","Data":"b6bb40befec29c115adbb0ae40163fe414800721827817dde5fc42ee96d60b55"} Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.632793 4698 scope.go:117] "RemoveContainer" containerID="3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.633226 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.635883 4698 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.636115 4698 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0") on node "crc" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.688547 4698 scope.go:117] "RemoveContainer" containerID="59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.700676 4698 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40388b3e-433c-484a-b0aa-c7e427601657-web-config\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.700720 4698 reconciler_common.go:293] "Volume detached for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") on node \"crc\" DevicePath \"\"" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.714168 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.717934 4698 scope.go:117] "RemoveContainer" containerID="f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.733348 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.757376 4698 scope.go:117] "RemoveContainer" containerID="faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.759958 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.760522 4698 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="thanos-sidecar" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760544 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="thanos-sidecar" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.760555 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="prometheus" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760568 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="prometheus" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.760616 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7ed42f-2288-48ac-9f89-9305e2f4a151" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760623 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7ed42f-2288-48ac-9f89-9305e2f4a151" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.760637 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="init-config-reloader" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760643 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="init-config-reloader" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.760658 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="config-reloader" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760665 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="config-reloader" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760898 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="prometheus" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760922 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="config-reloader" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760935 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7ed42f-2288-48ac-9f89-9305e2f4a151" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.760954 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="40388b3e-433c-484a-b0aa-c7e427601657" containerName="thanos-sidecar" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.763005 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.767305 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.768241 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.768273 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.768327 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-plfqm" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.768273 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.771151 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.777677 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.805917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806167 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 
12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806393 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8zn\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-kube-api-access-5t8zn\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806425 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806468 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.806575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.809074 4698 scope.go:117] "RemoveContainer" containerID="3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.809679 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202\": container with ID starting with 3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202 not found: ID does not exist" containerID="3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.809721 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202"} err="failed to get container status 
\"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202\": rpc error: code = NotFound desc = could not find container \"3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202\": container with ID starting with 3ab7d2f67f22d4f4784b234da9e7a96896474429c92ae6f4eddfcaaa786db202 not found: ID does not exist" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.809750 4698 scope.go:117] "RemoveContainer" containerID="59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.809987 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7\": container with ID starting with 59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7 not found: ID does not exist" containerID="59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.810026 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7"} err="failed to get container status \"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7\": rpc error: code = NotFound desc = could not find container \"59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7\": container with ID starting with 59d7c7b77b4fbd024ff17a4bcff60dff2efe58b72f13fd479caa911d2c138fd7 not found: ID does not exist" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.810043 4698 scope.go:117] "RemoveContainer" containerID="f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.810356 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5\": container with ID starting with f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5 not found: ID does not exist" containerID="f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.810399 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5"} err="failed to get container status \"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5\": rpc error: code = NotFound desc = could not find container \"f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5\": container with ID starting with f69f5142711c75d163b58cd0b9b471d6d9e42379ce5866949c385438829d29d5 not found: ID does not exist" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.810432 4698 scope.go:117] "RemoveContainer" containerID="faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2" Oct 06 12:34:21 crc kubenswrapper[4698]: E1006 12:34:21.810746 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2\": container with ID starting with faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2 not found: ID does not exist" containerID="faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.810778 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2"} err="failed to get container status \"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2\": rpc error: code = NotFound desc = could not find container \"faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2\": container with ID 
starting with faa48116046c7b89875e266cbe74f33c4774259d79330bee5479ab0cef7e8ee2 not found: ID does not exist" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.908729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.908843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.908894 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.908914 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.908990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8zn\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-kube-api-access-5t8zn\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909052 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.909326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.910727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.915567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.919427 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.919510 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44d562176676e8d8573ee8cf6c79a771697a87ae1dbf01fea0ea1f08f9081a45/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.919546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.919918 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.921723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 
12:34:21.921374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.925166 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.926370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-config\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.928216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8zn\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-kube-api-access-5t8zn\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.938135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34fdded6-e8a1-4564-bd6a-9ed17c9e57b5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:21 crc kubenswrapper[4698]: I1006 12:34:21.987111 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb8da665-dabe-4ee0-8885-e3b350bdd8a0\") pod \"prometheus-metric-storage-0\" (UID: \"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5\") " pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:22 crc kubenswrapper[4698]: I1006 12:34:22.123544 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:22 crc kubenswrapper[4698]: I1006 12:34:22.653991 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 06 12:34:23 crc kubenswrapper[4698]: I1006 12:34:23.358160 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40388b3e-433c-484a-b0aa-c7e427601657" path="/var/lib/kubelet/pods/40388b3e-433c-484a-b0aa-c7e427601657/volumes" Oct 06 12:34:23 crc kubenswrapper[4698]: I1006 12:34:23.683795 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerStarted","Data":"01606dbf7707ec2e0d663af66911979f702b335c9130e948a6d6fdb02e4eeaed"} Oct 06 12:34:25 crc kubenswrapper[4698]: I1006 12:34:25.330097 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:34:25 crc kubenswrapper[4698]: E1006 12:34:25.331145 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:34:28 crc kubenswrapper[4698]: I1006 12:34:28.771234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerStarted","Data":"30a1a7a637b114fd1ab463ffdb7448a2528c92317b853fcf2a99ffc6415d7b40"} Oct 06 12:34:39 crc kubenswrapper[4698]: I1006 12:34:39.950425 4698 generic.go:334] "Generic (PLEG): container finished" podID="34fdded6-e8a1-4564-bd6a-9ed17c9e57b5" containerID="30a1a7a637b114fd1ab463ffdb7448a2528c92317b853fcf2a99ffc6415d7b40" exitCode=0 Oct 06 12:34:39 crc kubenswrapper[4698]: I1006 12:34:39.950530 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerDied","Data":"30a1a7a637b114fd1ab463ffdb7448a2528c92317b853fcf2a99ffc6415d7b40"} Oct 06 12:34:40 crc kubenswrapper[4698]: I1006 12:34:40.332930 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:34:40 crc kubenswrapper[4698]: E1006 12:34:40.333859 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:34:40 crc kubenswrapper[4698]: I1006 12:34:40.966583 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerStarted","Data":"a3c9bd6abf34915f1b153410a259c28d4d84788aa2133a36b8b860818a7e5c71"} Oct 06 12:34:46 crc kubenswrapper[4698]: I1006 12:34:46.043428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerStarted","Data":"d450bc054a3af59d895bbef9313e1faf948ca9471948595a8b35de1ad81fa766"} Oct 06 12:34:47 crc kubenswrapper[4698]: I1006 12:34:47.063054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34fdded6-e8a1-4564-bd6a-9ed17c9e57b5","Type":"ContainerStarted","Data":"eade5f9e2735c3955d61c5d3c5453705451b51fe8b0ee6e8216a974ef490915b"} Oct 06 12:34:47 crc kubenswrapper[4698]: I1006 12:34:47.120849 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.120815577 podStartE2EDuration="26.120815577s" podCreationTimestamp="2025-10-06 12:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 12:34:47.115483394 +0000 UTC m=+2974.528175647" watchObservedRunningTime="2025-10-06 12:34:47.120815577 +0000 UTC m=+2974.533507790" Oct 06 12:34:47 crc kubenswrapper[4698]: I1006 12:34:47.128106 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:51 crc kubenswrapper[4698]: I1006 12:34:51.330864 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:34:51 crc kubenswrapper[4698]: E1006 12:34:51.332469 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:34:52 crc kubenswrapper[4698]: I1006 12:34:52.125257 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:52 crc kubenswrapper[4698]: I1006 12:34:52.162313 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 06 12:34:52 crc kubenswrapper[4698]: I1006 12:34:52.170222 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 06 12:35:03 crc kubenswrapper[4698]: I1006 12:35:03.345461 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:35:03 crc kubenswrapper[4698]: E1006 12:35:03.346737 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:35:14 crc kubenswrapper[4698]: I1006 12:35:14.329945 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:35:14 crc kubenswrapper[4698]: E1006 12:35:14.331424 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.858413 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 
12:35:17.861743 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.865344 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.865630 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pnjbc" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.866004 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.866100 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 06 12:35:17 crc kubenswrapper[4698]: I1006 12:35:17.881234 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.032346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.033205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.033271 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.033337 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.033455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.033693 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.034265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.034352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.034516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbrz\" (UniqueName: \"kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137504 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.137956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.138058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbrz\" (UniqueName: \"kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.138154 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.138208 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.138368 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.138945 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.139678 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.140389 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.147931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.148518 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.153385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.155671 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.178974 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbrz\" (UniqueName: \"kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.199298 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.207477 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.750125 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 06 12:35:18 crc kubenswrapper[4698]: I1006 12:35:18.770768 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:35:19 crc kubenswrapper[4698]: I1006 12:35:19.537419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"06bf5456-72f4-4eee-a851-c943572e317b","Type":"ContainerStarted","Data":"b27d1e49115de7e57d9b4eb73579c4023942773c7890675249eb1620cdc46007"} Oct 06 12:35:28 crc kubenswrapper[4698]: I1006 12:35:28.332708 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:35:28 crc kubenswrapper[4698]: E1006 12:35:28.333805 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:35:31 crc kubenswrapper[4698]: I1006 12:35:31.727702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"06bf5456-72f4-4eee-a851-c943572e317b","Type":"ContainerStarted","Data":"5f388c1fb15d05514598328d361e1400198852517a8659b06e98baa8d45ed414"} Oct 06 12:35:31 crc kubenswrapper[4698]: I1006 12:35:31.765135 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.034786952 podStartE2EDuration="15.765104534s" podCreationTimestamp="2025-10-06 12:35:16 +0000 UTC" firstStartedPulling="2025-10-06 
12:35:18.770303267 +0000 UTC m=+3006.182995480" lastFinishedPulling="2025-10-06 12:35:30.500620849 +0000 UTC m=+3017.913313062" observedRunningTime="2025-10-06 12:35:31.752602314 +0000 UTC m=+3019.165294517" watchObservedRunningTime="2025-10-06 12:35:31.765104534 +0000 UTC m=+3019.177796707" Oct 06 12:35:41 crc kubenswrapper[4698]: I1006 12:35:41.328811 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:35:41 crc kubenswrapper[4698]: E1006 12:35:41.329989 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:35:55 crc kubenswrapper[4698]: I1006 12:35:55.329333 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:35:55 crc kubenswrapper[4698]: E1006 12:35:55.333305 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:10 crc kubenswrapper[4698]: I1006 12:36:10.330270 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:36:10 crc kubenswrapper[4698]: E1006 12:36:10.331731 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:24 crc kubenswrapper[4698]: I1006 12:36:24.329518 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:36:24 crc kubenswrapper[4698]: E1006 12:36:24.330747 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:36 crc kubenswrapper[4698]: I1006 12:36:36.330413 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:36:36 crc kubenswrapper[4698]: E1006 12:36:36.332058 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:47 crc kubenswrapper[4698]: I1006 12:36:47.330414 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:36:47 crc kubenswrapper[4698]: E1006 12:36:47.331863 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.010579 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.013938 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.035845 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.050573 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.050707 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.050784 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pkb\" (UniqueName: \"kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb\") pod 
\"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.152290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pkb\" (UniqueName: \"kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.152467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.152545 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.153183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.153187 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities\") pod \"redhat-marketplace-jfj8p\" (UID: 
\"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.190962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pkb\" (UniqueName: \"kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb\") pod \"redhat-marketplace-jfj8p\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.353199 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.854519 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:36:54 crc kubenswrapper[4698]: I1006 12:36:54.926044 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerStarted","Data":"f8be4211fa945aeea8bc75f631d929797b324de6c02258cd56a632af0fea51fc"} Oct 06 12:36:55 crc kubenswrapper[4698]: I1006 12:36:55.942577 4698 generic.go:334] "Generic (PLEG): container finished" podID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerID="b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25" exitCode=0 Oct 06 12:36:55 crc kubenswrapper[4698]: I1006 12:36:55.942684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerDied","Data":"b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25"} Oct 06 12:36:57 crc kubenswrapper[4698]: I1006 12:36:57.969624 4698 generic.go:334] "Generic (PLEG): container finished" podID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" 
containerID="9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6" exitCode=0 Oct 06 12:36:57 crc kubenswrapper[4698]: I1006 12:36:57.969721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerDied","Data":"9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6"} Oct 06 12:36:58 crc kubenswrapper[4698]: I1006 12:36:58.329136 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:36:58 crc kubenswrapper[4698]: E1006 12:36:58.329417 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:36:58 crc kubenswrapper[4698]: I1006 12:36:58.984550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerStarted","Data":"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded"} Oct 06 12:36:59 crc kubenswrapper[4698]: I1006 12:36:59.017830 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jfj8p" podStartSLOduration=3.5681720009999998 podStartE2EDuration="6.017803246s" podCreationTimestamp="2025-10-06 12:36:53 +0000 UTC" firstStartedPulling="2025-10-06 12:36:55.946579803 +0000 UTC m=+3103.359271986" lastFinishedPulling="2025-10-06 12:36:58.396211048 +0000 UTC m=+3105.808903231" observedRunningTime="2025-10-06 12:36:59.005284276 +0000 UTC m=+3106.417976459" watchObservedRunningTime="2025-10-06 
12:36:59.017803246 +0000 UTC m=+3106.430495419" Oct 06 12:37:04 crc kubenswrapper[4698]: I1006 12:37:04.354232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:04 crc kubenswrapper[4698]: I1006 12:37:04.356457 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:04 crc kubenswrapper[4698]: I1006 12:37:04.454208 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:05 crc kubenswrapper[4698]: I1006 12:37:05.139367 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:05 crc kubenswrapper[4698]: I1006 12:37:05.251079 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.096953 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jfj8p" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="registry-server" containerID="cri-o://5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded" gracePeriod=2 Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.690445 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.742858 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content\") pod \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.743253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4pkb\" (UniqueName: \"kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb\") pod \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.743388 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities\") pod \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\" (UID: \"f36e2995-c68c-458f-abbc-36f9e2dcf8bf\") " Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.745303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities" (OuterVolumeSpecName: "utilities") pod "f36e2995-c68c-458f-abbc-36f9e2dcf8bf" (UID: "f36e2995-c68c-458f-abbc-36f9e2dcf8bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.755341 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb" (OuterVolumeSpecName: "kube-api-access-s4pkb") pod "f36e2995-c68c-458f-abbc-36f9e2dcf8bf" (UID: "f36e2995-c68c-458f-abbc-36f9e2dcf8bf"). InnerVolumeSpecName "kube-api-access-s4pkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.774459 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f36e2995-c68c-458f-abbc-36f9e2dcf8bf" (UID: "f36e2995-c68c-458f-abbc-36f9e2dcf8bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.847382 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.847433 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:37:07 crc kubenswrapper[4698]: I1006 12:37:07.847454 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4pkb\" (UniqueName: \"kubernetes.io/projected/f36e2995-c68c-458f-abbc-36f9e2dcf8bf-kube-api-access-s4pkb\") on node \"crc\" DevicePath \"\"" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.115409 4698 generic.go:334] "Generic (PLEG): container finished" podID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerID="5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded" exitCode=0 Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.115514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerDied","Data":"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded"} Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.115533 4698 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfj8p" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.117368 4698 scope.go:117] "RemoveContainer" containerID="5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.117228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfj8p" event={"ID":"f36e2995-c68c-458f-abbc-36f9e2dcf8bf","Type":"ContainerDied","Data":"f8be4211fa945aeea8bc75f631d929797b324de6c02258cd56a632af0fea51fc"} Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.177984 4698 scope.go:117] "RemoveContainer" containerID="9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.185911 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.197339 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfj8p"] Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.229598 4698 scope.go:117] "RemoveContainer" containerID="b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.301767 4698 scope.go:117] "RemoveContainer" containerID="5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded" Oct 06 12:37:08 crc kubenswrapper[4698]: E1006 12:37:08.302659 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded\": container with ID starting with 5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded not found: ID does not exist" containerID="5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.302722 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded"} err="failed to get container status \"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded\": rpc error: code = NotFound desc = could not find container \"5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded\": container with ID starting with 5e5fa3fefcc4a9129de1ca8759a64d852ff9f97eaa7f26e186b8927d3fd1aded not found: ID does not exist" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.302763 4698 scope.go:117] "RemoveContainer" containerID="9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6" Oct 06 12:37:08 crc kubenswrapper[4698]: E1006 12:37:08.304302 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6\": container with ID starting with 9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6 not found: ID does not exist" containerID="9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.304393 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6"} err="failed to get container status \"9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6\": rpc error: code = NotFound desc = could not find container \"9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6\": container with ID starting with 9effb4210e1a9f7e0ba208e02cc0ba14d3c2851c93ea16c21e1c6b8569a5b2d6 not found: ID does not exist" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.304455 4698 scope.go:117] "RemoveContainer" containerID="b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25" Oct 06 12:37:08 crc kubenswrapper[4698]: E1006 
12:37:08.305125 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25\": container with ID starting with b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25 not found: ID does not exist" containerID="b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25" Oct 06 12:37:08 crc kubenswrapper[4698]: I1006 12:37:08.305162 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25"} err="failed to get container status \"b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25\": rpc error: code = NotFound desc = could not find container \"b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25\": container with ID starting with b0b957296d0142d4d5851610247685d23997bc3bbe33c25b460f674a505eec25 not found: ID does not exist" Oct 06 12:37:09 crc kubenswrapper[4698]: I1006 12:37:09.352344 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" path="/var/lib/kubelet/pods/f36e2995-c68c-458f-abbc-36f9e2dcf8bf/volumes" Oct 06 12:37:11 crc kubenswrapper[4698]: I1006 12:37:11.330413 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:37:11 crc kubenswrapper[4698]: E1006 12:37:11.331221 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:37:23 crc kubenswrapper[4698]: I1006 12:37:23.340786 
4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:37:23 crc kubenswrapper[4698]: E1006 12:37:23.342182 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:37:38 crc kubenswrapper[4698]: I1006 12:37:38.328959 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:37:39 crc kubenswrapper[4698]: I1006 12:37:39.628899 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23"} Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.142283 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:41 crc kubenswrapper[4698]: E1006 12:38:41.144177 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="registry-server" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.144210 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="registry-server" Oct 06 12:38:41 crc kubenswrapper[4698]: E1006 12:38:41.144243 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="extract-content" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.144258 4698 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="extract-content" Oct 06 12:38:41 crc kubenswrapper[4698]: E1006 12:38:41.144317 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="extract-utilities" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.144334 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="extract-utilities" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.144755 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36e2995-c68c-458f-abbc-36f9e2dcf8bf" containerName="registry-server" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.147550 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.160447 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.250550 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.250648 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.250733 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6x25\" (UniqueName: \"kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.354662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6x25\" (UniqueName: \"kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.354766 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.354810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.355363 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.361275 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.389767 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6x25\" (UniqueName: \"kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25\") pod \"certified-operators-k95l6\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:41 crc kubenswrapper[4698]: I1006 12:38:41.480218 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:42 crc kubenswrapper[4698]: I1006 12:38:42.051307 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:42 crc kubenswrapper[4698]: I1006 12:38:42.510058 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerID="19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af" exitCode=0 Oct 06 12:38:42 crc kubenswrapper[4698]: I1006 12:38:42.510384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerDied","Data":"19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af"} Oct 06 12:38:42 crc kubenswrapper[4698]: I1006 12:38:42.510536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerStarted","Data":"820f8c6a4be3f2d9ec576821f405839a1f587118617c368c3bc20b8e721e91b7"} Oct 06 12:38:44 crc 
kubenswrapper[4698]: I1006 12:38:44.540250 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerStarted","Data":"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e"} Oct 06 12:38:46 crc kubenswrapper[4698]: I1006 12:38:46.580946 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerID="e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e" exitCode=0 Oct 06 12:38:46 crc kubenswrapper[4698]: I1006 12:38:46.581079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerDied","Data":"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e"} Oct 06 12:38:47 crc kubenswrapper[4698]: I1006 12:38:47.598511 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerStarted","Data":"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca"} Oct 06 12:38:47 crc kubenswrapper[4698]: I1006 12:38:47.630066 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k95l6" podStartSLOduration=2.099097643 podStartE2EDuration="6.630041199s" podCreationTimestamp="2025-10-06 12:38:41 +0000 UTC" firstStartedPulling="2025-10-06 12:38:42.512629748 +0000 UTC m=+3209.925321941" lastFinishedPulling="2025-10-06 12:38:47.043573324 +0000 UTC m=+3214.456265497" observedRunningTime="2025-10-06 12:38:47.62559849 +0000 UTC m=+3215.038290693" watchObservedRunningTime="2025-10-06 12:38:47.630041199 +0000 UTC m=+3215.042733382" Oct 06 12:38:51 crc kubenswrapper[4698]: I1006 12:38:51.481767 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:51 crc kubenswrapper[4698]: I1006 12:38:51.489891 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:51 crc kubenswrapper[4698]: I1006 12:38:51.568567 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:52 crc kubenswrapper[4698]: I1006 12:38:52.752798 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:52 crc kubenswrapper[4698]: I1006 12:38:52.822002 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:54 crc kubenswrapper[4698]: I1006 12:38:54.702868 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k95l6" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="registry-server" containerID="cri-o://85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca" gracePeriod=2 Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.226684 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.384081 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6x25\" (UniqueName: \"kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25\") pod \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.384390 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content\") pod \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.384590 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities\") pod \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\" (UID: \"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d\") " Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.385890 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities" (OuterVolumeSpecName: "utilities") pod "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" (UID: "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.395654 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25" (OuterVolumeSpecName: "kube-api-access-p6x25") pod "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" (UID: "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d"). InnerVolumeSpecName "kube-api-access-p6x25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.449746 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" (UID: "1e1a5e60-9e89-4cc2-99cd-af5de4e9788d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.487796 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.487840 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.487853 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6x25\" (UniqueName: \"kubernetes.io/projected/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d-kube-api-access-p6x25\") on node \"crc\" DevicePath \"\"" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.720110 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerID="85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca" exitCode=0 Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.720209 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerDied","Data":"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca"} Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.720302 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-k95l6" event={"ID":"1e1a5e60-9e89-4cc2-99cd-af5de4e9788d","Type":"ContainerDied","Data":"820f8c6a4be3f2d9ec576821f405839a1f587118617c368c3bc20b8e721e91b7"} Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.720353 4698 scope.go:117] "RemoveContainer" containerID="85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.720430 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k95l6" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.780616 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.787066 4698 scope.go:117] "RemoveContainer" containerID="e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.797806 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k95l6"] Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.823165 4698 scope.go:117] "RemoveContainer" containerID="19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.861437 4698 scope.go:117] "RemoveContainer" containerID="85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca" Oct 06 12:38:55 crc kubenswrapper[4698]: E1006 12:38:55.862262 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca\": container with ID starting with 85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca not found: ID does not exist" containerID="85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 
12:38:55.862325 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca"} err="failed to get container status \"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca\": rpc error: code = NotFound desc = could not find container \"85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca\": container with ID starting with 85a1f49c7bd78144309ed94d9593a2c3022cbb8538f287799a4e0d32221187ca not found: ID does not exist" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.862351 4698 scope.go:117] "RemoveContainer" containerID="e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e" Oct 06 12:38:55 crc kubenswrapper[4698]: E1006 12:38:55.862931 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e\": container with ID starting with e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e not found: ID does not exist" containerID="e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.862961 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e"} err="failed to get container status \"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e\": rpc error: code = NotFound desc = could not find container \"e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e\": container with ID starting with e537cd5874593764f2e4bb9b661be15a73efd9b0620df1fc42ddebbc55dfe30e not found: ID does not exist" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.862977 4698 scope.go:117] "RemoveContainer" containerID="19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af" Oct 06 12:38:55 crc 
kubenswrapper[4698]: E1006 12:38:55.863538 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af\": container with ID starting with 19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af not found: ID does not exist" containerID="19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af" Oct 06 12:38:55 crc kubenswrapper[4698]: I1006 12:38:55.863564 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af"} err="failed to get container status \"19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af\": rpc error: code = NotFound desc = could not find container \"19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af\": container with ID starting with 19c91ec2ce614c577b948a07af53741812376eaa6df278c4119043f44fdb84af not found: ID does not exist" Oct 06 12:38:57 crc kubenswrapper[4698]: I1006 12:38:57.351348 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" path="/var/lib/kubelet/pods/1e1a5e60-9e89-4cc2-99cd-af5de4e9788d/volumes" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.829988 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:39:50 crc kubenswrapper[4698]: E1006 12:39:50.831562 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="registry-server" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.831586 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="registry-server" Oct 06 12:39:50 crc kubenswrapper[4698]: E1006 12:39:50.831609 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="extract-content" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.831621 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="extract-content" Oct 06 12:39:50 crc kubenswrapper[4698]: E1006 12:39:50.831685 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="extract-utilities" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.831699 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="extract-utilities" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.832099 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1a5e60-9e89-4cc2-99cd-af5de4e9788d" containerName="registry-server" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.838859 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:50 crc kubenswrapper[4698]: I1006 12:39:50.857444 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.015529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8dl2\" (UniqueName: \"kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.015699 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities\") pod \"redhat-operators-9lh8f\" (UID: 
\"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.015728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.118263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8dl2\" (UniqueName: \"kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.119001 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.119641 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.119586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " 
pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.120001 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.144758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8dl2\" (UniqueName: \"kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2\") pod \"redhat-operators-9lh8f\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.183607 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:39:51 crc kubenswrapper[4698]: I1006 12:39:51.728573 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:39:52 crc kubenswrapper[4698]: I1006 12:39:52.492965 4698 generic.go:334] "Generic (PLEG): container finished" podID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerID="f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035" exitCode=0 Oct 06 12:39:52 crc kubenswrapper[4698]: I1006 12:39:52.493139 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerDied","Data":"f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035"} Oct 06 12:39:52 crc kubenswrapper[4698]: I1006 12:39:52.493455 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" 
event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerStarted","Data":"a9547df4fe5d1bd76f0ebddbd87b95ef544f6ec9ae2c420c4a30e8c43cd03db0"} Oct 06 12:39:54 crc kubenswrapper[4698]: I1006 12:39:54.527403 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerStarted","Data":"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092"} Oct 06 12:39:55 crc kubenswrapper[4698]: I1006 12:39:55.235563 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:39:55 crc kubenswrapper[4698]: I1006 12:39:55.236456 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:39:57 crc kubenswrapper[4698]: I1006 12:39:57.582472 4698 generic.go:334] "Generic (PLEG): container finished" podID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerID="198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092" exitCode=0 Oct 06 12:39:57 crc kubenswrapper[4698]: I1006 12:39:57.582578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerDied","Data":"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092"} Oct 06 12:39:58 crc kubenswrapper[4698]: I1006 12:39:58.611205 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" 
event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerStarted","Data":"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679"} Oct 06 12:39:58 crc kubenswrapper[4698]: I1006 12:39:58.648643 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lh8f" podStartSLOduration=2.873533349 podStartE2EDuration="8.648614346s" podCreationTimestamp="2025-10-06 12:39:50 +0000 UTC" firstStartedPulling="2025-10-06 12:39:52.495882088 +0000 UTC m=+3279.908574261" lastFinishedPulling="2025-10-06 12:39:58.270963085 +0000 UTC m=+3285.683655258" observedRunningTime="2025-10-06 12:39:58.638674261 +0000 UTC m=+3286.051366454" watchObservedRunningTime="2025-10-06 12:39:58.648614346 +0000 UTC m=+3286.061306509" Oct 06 12:40:01 crc kubenswrapper[4698]: I1006 12:40:01.184463 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:01 crc kubenswrapper[4698]: I1006 12:40:01.184649 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:02 crc kubenswrapper[4698]: I1006 12:40:02.272147 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lh8f" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" probeResult="failure" output=< Oct 06 12:40:02 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 12:40:02 crc kubenswrapper[4698]: > Oct 06 12:40:12 crc kubenswrapper[4698]: I1006 12:40:12.243911 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lh8f" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" probeResult="failure" output=< Oct 06 12:40:12 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 12:40:12 crc kubenswrapper[4698]: > Oct 06 
12:40:21 crc kubenswrapper[4698]: I1006 12:40:21.250374 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:21 crc kubenswrapper[4698]: I1006 12:40:21.310456 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:22 crc kubenswrapper[4698]: I1006 12:40:22.024378 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:40:22 crc kubenswrapper[4698]: I1006 12:40:22.959973 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lh8f" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" containerID="cri-o://dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679" gracePeriod=2 Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.631563 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.810512 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8dl2\" (UniqueName: \"kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2\") pod \"55db269d-6be2-4855-9062-6cd10a9f7e85\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.810631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities\") pod \"55db269d-6be2-4855-9062-6cd10a9f7e85\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.810713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content\") pod \"55db269d-6be2-4855-9062-6cd10a9f7e85\" (UID: \"55db269d-6be2-4855-9062-6cd10a9f7e85\") " Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.811932 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities" (OuterVolumeSpecName: "utilities") pod "55db269d-6be2-4855-9062-6cd10a9f7e85" (UID: "55db269d-6be2-4855-9062-6cd10a9f7e85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.818321 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2" (OuterVolumeSpecName: "kube-api-access-l8dl2") pod "55db269d-6be2-4855-9062-6cd10a9f7e85" (UID: "55db269d-6be2-4855-9062-6cd10a9f7e85"). InnerVolumeSpecName "kube-api-access-l8dl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.905916 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55db269d-6be2-4855-9062-6cd10a9f7e85" (UID: "55db269d-6be2-4855-9062-6cd10a9f7e85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.913645 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8dl2\" (UniqueName: \"kubernetes.io/projected/55db269d-6be2-4855-9062-6cd10a9f7e85-kube-api-access-l8dl2\") on node \"crc\" DevicePath \"\"" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.913695 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.913707 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55db269d-6be2-4855-9062-6cd10a9f7e85-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.974400 4698 generic.go:334] "Generic (PLEG): container finished" podID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerID="dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679" exitCode=0 Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.974451 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lh8f" event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerDied","Data":"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679"} Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.974488 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9lh8f" event={"ID":"55db269d-6be2-4855-9062-6cd10a9f7e85","Type":"ContainerDied","Data":"a9547df4fe5d1bd76f0ebddbd87b95ef544f6ec9ae2c420c4a30e8c43cd03db0"} Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.974515 4698 scope.go:117] "RemoveContainer" containerID="dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679" Oct 06 12:40:23 crc kubenswrapper[4698]: I1006 12:40:23.974538 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lh8f" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.019666 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.019868 4698 scope.go:117] "RemoveContainer" containerID="198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.031823 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lh8f"] Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.060042 4698 scope.go:117] "RemoveContainer" containerID="f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.095855 4698 scope.go:117] "RemoveContainer" containerID="dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679" Oct 06 12:40:24 crc kubenswrapper[4698]: E1006 12:40:24.100126 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679\": container with ID starting with dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679 not found: ID does not exist" containerID="dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.100184 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679"} err="failed to get container status \"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679\": rpc error: code = NotFound desc = could not find container \"dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679\": container with ID starting with dab9be8daa4cc6a9a32b38fc9ecd0131aab7370996c736d5d6ec41644d5f3679 not found: ID does not exist" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.100212 4698 scope.go:117] "RemoveContainer" containerID="198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092" Oct 06 12:40:24 crc kubenswrapper[4698]: E1006 12:40:24.100828 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092\": container with ID starting with 198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092 not found: ID does not exist" containerID="198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.100897 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092"} err="failed to get container status \"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092\": rpc error: code = NotFound desc = could not find container \"198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092\": container with ID starting with 198bf65be60ce715926c68b5676f6916a194aa3e9c05907f986ba35f8f952092 not found: ID does not exist" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.100942 4698 scope.go:117] "RemoveContainer" containerID="f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035" Oct 06 12:40:24 crc kubenswrapper[4698]: E1006 
12:40:24.101353 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035\": container with ID starting with f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035 not found: ID does not exist" containerID="f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035" Oct 06 12:40:24 crc kubenswrapper[4698]: I1006 12:40:24.101389 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035"} err="failed to get container status \"f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035\": rpc error: code = NotFound desc = could not find container \"f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035\": container with ID starting with f135a227e1e4d0c40c0fb108699222b1596c5e92b0da37ce555714505df67035 not found: ID does not exist" Oct 06 12:40:25 crc kubenswrapper[4698]: I1006 12:40:25.235577 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:40:25 crc kubenswrapper[4698]: I1006 12:40:25.237923 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:40:25 crc kubenswrapper[4698]: I1006 12:40:25.350920 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" 
path="/var/lib/kubelet/pods/55db269d-6be2-4855-9062-6cd10a9f7e85/volumes" Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.235158 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.236705 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.236818 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.238827 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.238983 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23" gracePeriod=600 Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.435135 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23" exitCode=0 Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.435208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23"} Oct 06 12:40:55 crc kubenswrapper[4698]: I1006 12:40:55.436102 4698 scope.go:117] "RemoveContainer" containerID="ac98bfab4177cca5791f50ca84b24cc526d09c241f0130b6002b1149beb8ec1e" Oct 06 12:40:56 crc kubenswrapper[4698]: I1006 12:40:56.453837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72"} Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.290897 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:03 crc kubenswrapper[4698]: E1006 12:41:03.295499 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="extract-content" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.295541 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="extract-content" Oct 06 12:41:03 crc kubenswrapper[4698]: E1006 12:41:03.295639 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.295650 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" Oct 06 12:41:03 crc kubenswrapper[4698]: E1006 
12:41:03.295687 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="extract-utilities" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.295699 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="extract-utilities" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.296149 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="55db269d-6be2-4855-9062-6cd10a9f7e85" containerName="registry-server" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.300006 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.372703 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.389286 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.389745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.392328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgf4\" (UniqueName: 
\"kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.494780 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgf4\" (UniqueName: \"kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.494971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.495056 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.495633 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.495785 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.519687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgf4\" (UniqueName: \"kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4\") pod \"community-operators-2hm6f\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:03 crc kubenswrapper[4698]: I1006 12:41:03.665460 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:04 crc kubenswrapper[4698]: I1006 12:41:04.231602 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:04 crc kubenswrapper[4698]: I1006 12:41:04.564901 4698 generic.go:334] "Generic (PLEG): container finished" podID="8119c8e5-4c78-4200-b240-7c2318319809" containerID="f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d" exitCode=0 Oct 06 12:41:04 crc kubenswrapper[4698]: I1006 12:41:04.564964 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerDied","Data":"f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d"} Oct 06 12:41:04 crc kubenswrapper[4698]: I1006 12:41:04.565000 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerStarted","Data":"308e73c5113ff34b423a2ef1a4375ece3dba29b32883ed69d32c845366653f76"} Oct 06 12:41:04 crc kubenswrapper[4698]: I1006 12:41:04.568680 4698 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:41:06 crc kubenswrapper[4698]: I1006 12:41:06.605056 4698 generic.go:334] "Generic (PLEG): container finished" podID="8119c8e5-4c78-4200-b240-7c2318319809" containerID="001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a" exitCode=0 Oct 06 12:41:06 crc kubenswrapper[4698]: I1006 12:41:06.605507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerDied","Data":"001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a"} Oct 06 12:41:08 crc kubenswrapper[4698]: I1006 12:41:08.649830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerStarted","Data":"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34"} Oct 06 12:41:08 crc kubenswrapper[4698]: I1006 12:41:08.673096 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hm6f" podStartSLOduration=2.7878173779999997 podStartE2EDuration="5.673074522s" podCreationTimestamp="2025-10-06 12:41:03 +0000 UTC" firstStartedPulling="2025-10-06 12:41:04.568408124 +0000 UTC m=+3351.981100297" lastFinishedPulling="2025-10-06 12:41:07.453665238 +0000 UTC m=+3354.866357441" observedRunningTime="2025-10-06 12:41:08.669190476 +0000 UTC m=+3356.081882649" watchObservedRunningTime="2025-10-06 12:41:08.673074522 +0000 UTC m=+3356.085766695" Oct 06 12:41:13 crc kubenswrapper[4698]: I1006 12:41:13.665820 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:13 crc kubenswrapper[4698]: I1006 12:41:13.666758 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:13 crc 
kubenswrapper[4698]: I1006 12:41:13.730797 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:13 crc kubenswrapper[4698]: I1006 12:41:13.790068 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:13 crc kubenswrapper[4698]: I1006 12:41:13.984539 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:15 crc kubenswrapper[4698]: I1006 12:41:15.736809 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hm6f" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="registry-server" containerID="cri-o://ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34" gracePeriod=2 Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.424466 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.484992 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") pod \"8119c8e5-4c78-4200-b240-7c2318319809\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.485183 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgf4\" (UniqueName: \"kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4\") pod \"8119c8e5-4c78-4200-b240-7c2318319809\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.485300 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities\") pod \"8119c8e5-4c78-4200-b240-7c2318319809\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.487381 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities" (OuterVolumeSpecName: "utilities") pod "8119c8e5-4c78-4200-b240-7c2318319809" (UID: "8119c8e5-4c78-4200-b240-7c2318319809"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.513582 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4" (OuterVolumeSpecName: "kube-api-access-mlgf4") pod "8119c8e5-4c78-4200-b240-7c2318319809" (UID: "8119c8e5-4c78-4200-b240-7c2318319809"). InnerVolumeSpecName "kube-api-access-mlgf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.588928 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgf4\" (UniqueName: \"kubernetes.io/projected/8119c8e5-4c78-4200-b240-7c2318319809-kube-api-access-mlgf4\") on node \"crc\" DevicePath \"\"" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.588972 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.754191 4698 generic.go:334] "Generic (PLEG): container finished" podID="8119c8e5-4c78-4200-b240-7c2318319809" containerID="ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34" exitCode=0 Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.754257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerDied","Data":"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34"} Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.754321 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hm6f" event={"ID":"8119c8e5-4c78-4200-b240-7c2318319809","Type":"ContainerDied","Data":"308e73c5113ff34b423a2ef1a4375ece3dba29b32883ed69d32c845366653f76"} Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.754347 4698 scope.go:117] "RemoveContainer" containerID="ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.754524 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hm6f" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.780743 4698 scope.go:117] "RemoveContainer" containerID="001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.794517 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8119c8e5-4c78-4200-b240-7c2318319809" (UID: "8119c8e5-4c78-4200-b240-7c2318319809"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.796394 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") pod \"8119c8e5-4c78-4200-b240-7c2318319809\" (UID: \"8119c8e5-4c78-4200-b240-7c2318319809\") " Oct 06 12:41:16 crc kubenswrapper[4698]: W1006 12:41:16.797433 4698 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8119c8e5-4c78-4200-b240-7c2318319809/volumes/kubernetes.io~empty-dir/catalog-content Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.797459 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8119c8e5-4c78-4200-b240-7c2318319809" (UID: "8119c8e5-4c78-4200-b240-7c2318319809"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.819676 4698 scope.go:117] "RemoveContainer" containerID="f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.861735 4698 scope.go:117] "RemoveContainer" containerID="ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34" Oct 06 12:41:16 crc kubenswrapper[4698]: E1006 12:41:16.862937 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34\": container with ID starting with ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34 not found: ID does not exist" containerID="ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.862984 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34"} err="failed to get container status \"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34\": rpc error: code = NotFound desc = could not find container \"ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34\": container with ID starting with ea9be6817620ed5a1dc9efe4d38793eca3860434bf9e84fd54f1ef7c88d46f34 not found: ID does not exist" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.863040 4698 scope.go:117] "RemoveContainer" containerID="001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a" Oct 06 12:41:16 crc kubenswrapper[4698]: E1006 12:41:16.863796 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a\": container with ID starting with 
001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a not found: ID does not exist" containerID="001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.863836 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a"} err="failed to get container status \"001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a\": rpc error: code = NotFound desc = could not find container \"001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a\": container with ID starting with 001b15a38e73d5510de88ef217b074b740ebcb795c24612bc09176fa1465399a not found: ID does not exist" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.863857 4698 scope.go:117] "RemoveContainer" containerID="f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d" Oct 06 12:41:16 crc kubenswrapper[4698]: E1006 12:41:16.865163 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d\": container with ID starting with f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d not found: ID does not exist" containerID="f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.865198 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d"} err="failed to get container status \"f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d\": rpc error: code = NotFound desc = could not find container \"f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d\": container with ID starting with f2c239376c7654a7b40f03af393d5ffbf8ad23124f3b8610c6fa7ecff16ac30d not found: ID does not 
exist" Oct 06 12:41:16 crc kubenswrapper[4698]: I1006 12:41:16.900158 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8119c8e5-4c78-4200-b240-7c2318319809-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:41:17 crc kubenswrapper[4698]: I1006 12:41:17.098887 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:17 crc kubenswrapper[4698]: I1006 12:41:17.113105 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hm6f"] Oct 06 12:41:17 crc kubenswrapper[4698]: I1006 12:41:17.362343 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8119c8e5-4c78-4200-b240-7c2318319809" path="/var/lib/kubelet/pods/8119c8e5-4c78-4200-b240-7c2318319809/volumes" Oct 06 12:42:55 crc kubenswrapper[4698]: I1006 12:42:55.235454 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:42:55 crc kubenswrapper[4698]: I1006 12:42:55.236670 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:43:25 crc kubenswrapper[4698]: I1006 12:43:25.235425 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:43:25 crc 
kubenswrapper[4698]: I1006 12:43:25.236115 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.235226 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.237058 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.237225 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.239574 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.239659 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" gracePeriod=600 Oct 06 12:43:55 crc kubenswrapper[4698]: E1006 12:43:55.379690 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.950252 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" exitCode=0 Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.950836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72"} Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.950905 4698 scope.go:117] "RemoveContainer" containerID="23b26c6fa878f7db1dfb64f5a5429b7f7b3a04d592e96c8eeea33d2c2e3f4f23" Oct 06 12:43:55 crc kubenswrapper[4698]: I1006 12:43:55.952002 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:43:55 crc kubenswrapper[4698]: E1006 12:43:55.952472 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:44:09 crc kubenswrapper[4698]: I1006 12:44:09.330132 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:44:09 crc kubenswrapper[4698]: E1006 12:44:09.331493 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:44:23 crc kubenswrapper[4698]: I1006 12:44:23.361450 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:44:23 crc kubenswrapper[4698]: E1006 12:44:23.363570 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:44:35 crc kubenswrapper[4698]: I1006 12:44:35.328989 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:44:35 crc kubenswrapper[4698]: E1006 12:44:35.330256 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:44:46 crc kubenswrapper[4698]: I1006 12:44:46.330180 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:44:46 crc kubenswrapper[4698]: E1006 12:44:46.331429 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.201547 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj"] Oct 06 12:45:00 crc kubenswrapper[4698]: E1006 12:45:00.202736 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="extract-content" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.202755 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="extract-content" Oct 06 12:45:00 crc kubenswrapper[4698]: E1006 12:45:00.202787 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="extract-utilities" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.202794 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="extract-utilities" Oct 06 12:45:00 crc kubenswrapper[4698]: E1006 12:45:00.202836 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.202843 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.203111 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8119c8e5-4c78-4200-b240-7c2318319809" containerName="registry-server" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.203909 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.207727 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.208161 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.233292 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj"] Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.274546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.274684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.274949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zltg\" (UniqueName: \"kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.329879 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:45:00 crc kubenswrapper[4698]: E1006 12:45:00.330239 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.377986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.378203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.378411 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zltg\" (UniqueName: \"kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.379829 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.390192 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.409942 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zltg\" (UniqueName: \"kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg\") pod \"collect-profiles-29329245-kpqbj\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:00 crc kubenswrapper[4698]: I1006 12:45:00.555217 4698 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:01 crc kubenswrapper[4698]: I1006 12:45:01.067408 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj"] Oct 06 12:45:01 crc kubenswrapper[4698]: I1006 12:45:01.934266 4698 generic.go:334] "Generic (PLEG): container finished" podID="44fbd4c3-6c06-465d-aac4-1391de15548c" containerID="0b4b33c879ff6a9d0572a2b309e9b7e7173af40843f668633ae088caacd06ef2" exitCode=0 Oct 06 12:45:01 crc kubenswrapper[4698]: I1006 12:45:01.934412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" event={"ID":"44fbd4c3-6c06-465d-aac4-1391de15548c","Type":"ContainerDied","Data":"0b4b33c879ff6a9d0572a2b309e9b7e7173af40843f668633ae088caacd06ef2"} Oct 06 12:45:01 crc kubenswrapper[4698]: I1006 12:45:01.935168 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" event={"ID":"44fbd4c3-6c06-465d-aac4-1391de15548c","Type":"ContainerStarted","Data":"c1f3a1de51a7ce3a83aeb533b1092126d3aeeeba7a32858102ebb748dc93a9a5"} Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.404916 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.556106 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zltg\" (UniqueName: \"kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg\") pod \"44fbd4c3-6c06-465d-aac4-1391de15548c\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.556446 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume\") pod \"44fbd4c3-6c06-465d-aac4-1391de15548c\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.556649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume\") pod \"44fbd4c3-6c06-465d-aac4-1391de15548c\" (UID: \"44fbd4c3-6c06-465d-aac4-1391de15548c\") " Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.557130 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume" (OuterVolumeSpecName: "config-volume") pod "44fbd4c3-6c06-465d-aac4-1391de15548c" (UID: "44fbd4c3-6c06-465d-aac4-1391de15548c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.557535 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fbd4c3-6c06-465d-aac4-1391de15548c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.563895 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg" (OuterVolumeSpecName: "kube-api-access-9zltg") pod "44fbd4c3-6c06-465d-aac4-1391de15548c" (UID: "44fbd4c3-6c06-465d-aac4-1391de15548c"). InnerVolumeSpecName "kube-api-access-9zltg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.565630 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44fbd4c3-6c06-465d-aac4-1391de15548c" (UID: "44fbd4c3-6c06-465d-aac4-1391de15548c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.659301 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zltg\" (UniqueName: \"kubernetes.io/projected/44fbd4c3-6c06-465d-aac4-1391de15548c-kube-api-access-9zltg\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.659339 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fbd4c3-6c06-465d-aac4-1391de15548c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.965409 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" event={"ID":"44fbd4c3-6c06-465d-aac4-1391de15548c","Type":"ContainerDied","Data":"c1f3a1de51a7ce3a83aeb533b1092126d3aeeeba7a32858102ebb748dc93a9a5"} Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.966205 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f3a1de51a7ce3a83aeb533b1092126d3aeeeba7a32858102ebb748dc93a9a5" Oct 06 12:45:03 crc kubenswrapper[4698]: I1006 12:45:03.965499 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj" Oct 06 12:45:04 crc kubenswrapper[4698]: I1006 12:45:04.527084 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk"] Oct 06 12:45:04 crc kubenswrapper[4698]: I1006 12:45:04.538844 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329200-v4hpk"] Oct 06 12:45:05 crc kubenswrapper[4698]: I1006 12:45:05.341538 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9" path="/var/lib/kubelet/pods/ca2cfa42-d600-4fa2-ae6a-8c9b9f4083b9/volumes" Oct 06 12:45:14 crc kubenswrapper[4698]: I1006 12:45:14.329306 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:45:14 crc kubenswrapper[4698]: E1006 12:45:14.330755 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:45:26 crc kubenswrapper[4698]: I1006 12:45:26.329128 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:45:26 crc kubenswrapper[4698]: E1006 12:45:26.330487 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:45:39 crc kubenswrapper[4698]: I1006 12:45:39.331246 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:45:39 crc kubenswrapper[4698]: E1006 12:45:39.332083 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:45:43 crc kubenswrapper[4698]: I1006 12:45:43.977072 4698 scope.go:117] "RemoveContainer" containerID="c00346d332ba6bd78b557398ff2ddd1296d0ec260bfd8d6529b237f9de9a668b" Oct 06 12:45:50 crc kubenswrapper[4698]: I1006 12:45:50.329646 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:45:50 crc kubenswrapper[4698]: E1006 12:45:50.331080 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:46:03 crc kubenswrapper[4698]: I1006 12:46:03.348737 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:46:03 crc kubenswrapper[4698]: E1006 12:46:03.350158 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:46:16 crc kubenswrapper[4698]: I1006 12:46:16.329901 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:46:16 crc kubenswrapper[4698]: E1006 12:46:16.331066 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:46:28 crc kubenswrapper[4698]: I1006 12:46:28.329132 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:46:28 crc kubenswrapper[4698]: E1006 12:46:28.330367 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:46:40 crc kubenswrapper[4698]: I1006 12:46:40.330029 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:46:40 crc kubenswrapper[4698]: E1006 12:46:40.330980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:46:52 crc kubenswrapper[4698]: I1006 12:46:52.329962 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:46:52 crc kubenswrapper[4698]: E1006 12:46:52.332731 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:47:05 crc kubenswrapper[4698]: I1006 12:47:05.329392 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:47:05 crc kubenswrapper[4698]: E1006 12:47:05.330805 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:47:18 crc kubenswrapper[4698]: I1006 12:47:18.330690 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:47:18 crc kubenswrapper[4698]: E1006 12:47:18.331917 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:47:29 crc kubenswrapper[4698]: I1006 12:47:29.329748 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:47:29 crc kubenswrapper[4698]: E1006 12:47:29.330721 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:47:44 crc kubenswrapper[4698]: I1006 12:47:44.330483 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:47:44 crc kubenswrapper[4698]: E1006 12:47:44.331920 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:47:56 crc kubenswrapper[4698]: I1006 12:47:56.329055 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:47:56 crc kubenswrapper[4698]: E1006 12:47:56.330245 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.059888 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:02 crc kubenswrapper[4698]: E1006 12:48:02.061599 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fbd4c3-6c06-465d-aac4-1391de15548c" containerName="collect-profiles" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.061626 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fbd4c3-6c06-465d-aac4-1391de15548c" containerName="collect-profiles" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.061965 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fbd4c3-6c06-465d-aac4-1391de15548c" containerName="collect-profiles" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.067844 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.087372 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.143623 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.143701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8dbs\" (UniqueName: \"kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.143845 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.245967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.246036 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f8dbs\" (UniqueName: \"kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.246106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.246548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.246642 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.269183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8dbs\" (UniqueName: \"kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs\") pod \"redhat-marketplace-xxvbn\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.407936 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:02 crc kubenswrapper[4698]: I1006 12:48:02.925177 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:03 crc kubenswrapper[4698]: I1006 12:48:03.211664 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerStarted","Data":"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e"} Oct 06 12:48:03 crc kubenswrapper[4698]: I1006 12:48:03.211730 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerStarted","Data":"6ad7b1d7487544ff5ff7aba602abe55362f5e728cd49fd79581ad2c4f50b71cd"} Oct 06 12:48:03 crc kubenswrapper[4698]: I1006 12:48:03.217598 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 12:48:04 crc kubenswrapper[4698]: I1006 12:48:04.232482 4698 generic.go:334] "Generic (PLEG): container finished" podID="34069473-944d-4218-94ae-8c0dd36d17a8" containerID="0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e" exitCode=0 Oct 06 12:48:04 crc kubenswrapper[4698]: I1006 12:48:04.232662 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerDied","Data":"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e"} Oct 06 12:48:05 crc kubenswrapper[4698]: I1006 12:48:05.254164 4698 generic.go:334] "Generic (PLEG): container finished" podID="34069473-944d-4218-94ae-8c0dd36d17a8" containerID="02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69" exitCode=0 Oct 06 12:48:05 crc kubenswrapper[4698]: I1006 12:48:05.254328 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerDied","Data":"02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69"} Oct 06 12:48:06 crc kubenswrapper[4698]: I1006 12:48:06.271755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerStarted","Data":"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2"} Oct 06 12:48:07 crc kubenswrapper[4698]: I1006 12:48:07.330359 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:48:07 crc kubenswrapper[4698]: E1006 12:48:07.331584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:48:12 crc kubenswrapper[4698]: I1006 12:48:12.408245 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:12 crc kubenswrapper[4698]: I1006 12:48:12.409256 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:12 crc kubenswrapper[4698]: I1006 12:48:12.459871 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:12 crc kubenswrapper[4698]: I1006 12:48:12.481379 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxvbn" podStartSLOduration=7.801238674 
podStartE2EDuration="10.481349506s" podCreationTimestamp="2025-10-06 12:48:02 +0000 UTC" firstStartedPulling="2025-10-06 12:48:03.217294145 +0000 UTC m=+3770.629986328" lastFinishedPulling="2025-10-06 12:48:05.897404987 +0000 UTC m=+3773.310097160" observedRunningTime="2025-10-06 12:48:06.302181728 +0000 UTC m=+3773.714873921" watchObservedRunningTime="2025-10-06 12:48:12.481349506 +0000 UTC m=+3779.894041679" Oct 06 12:48:13 crc kubenswrapper[4698]: I1006 12:48:13.428081 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:13 crc kubenswrapper[4698]: I1006 12:48:13.505645 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:15 crc kubenswrapper[4698]: I1006 12:48:15.399818 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxvbn" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="registry-server" containerID="cri-o://653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2" gracePeriod=2 Oct 06 12:48:15 crc kubenswrapper[4698]: I1006 12:48:15.962078 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.158141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8dbs\" (UniqueName: \"kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs\") pod \"34069473-944d-4218-94ae-8c0dd36d17a8\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.158257 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities\") pod \"34069473-944d-4218-94ae-8c0dd36d17a8\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.159134 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content\") pod \"34069473-944d-4218-94ae-8c0dd36d17a8\" (UID: \"34069473-944d-4218-94ae-8c0dd36d17a8\") " Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.160151 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities" (OuterVolumeSpecName: "utilities") pod "34069473-944d-4218-94ae-8c0dd36d17a8" (UID: "34069473-944d-4218-94ae-8c0dd36d17a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.171157 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs" (OuterVolumeSpecName: "kube-api-access-f8dbs") pod "34069473-944d-4218-94ae-8c0dd36d17a8" (UID: "34069473-944d-4218-94ae-8c0dd36d17a8"). InnerVolumeSpecName "kube-api-access-f8dbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.184593 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34069473-944d-4218-94ae-8c0dd36d17a8" (UID: "34069473-944d-4218-94ae-8c0dd36d17a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.264722 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8dbs\" (UniqueName: \"kubernetes.io/projected/34069473-944d-4218-94ae-8c0dd36d17a8-kube-api-access-f8dbs\") on node \"crc\" DevicePath \"\"" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.264768 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.264778 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34069473-944d-4218-94ae-8c0dd36d17a8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.422579 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxvbn" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.422553 4698 generic.go:334] "Generic (PLEG): container finished" podID="34069473-944d-4218-94ae-8c0dd36d17a8" containerID="653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2" exitCode=0 Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.422639 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerDied","Data":"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2"} Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.422717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxvbn" event={"ID":"34069473-944d-4218-94ae-8c0dd36d17a8","Type":"ContainerDied","Data":"6ad7b1d7487544ff5ff7aba602abe55362f5e728cd49fd79581ad2c4f50b71cd"} Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.422771 4698 scope.go:117] "RemoveContainer" containerID="653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.467612 4698 scope.go:117] "RemoveContainer" containerID="02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.470178 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.479230 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxvbn"] Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.513827 4698 scope.go:117] "RemoveContainer" containerID="0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.574560 4698 scope.go:117] "RemoveContainer" 
containerID="653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2" Oct 06 12:48:16 crc kubenswrapper[4698]: E1006 12:48:16.575072 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2\": container with ID starting with 653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2 not found: ID does not exist" containerID="653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.575131 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2"} err="failed to get container status \"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2\": rpc error: code = NotFound desc = could not find container \"653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2\": container with ID starting with 653eb5b11b774816cce8dfad3b809586c6c821da457bbfaeb140d584b59cc9c2 not found: ID does not exist" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.575172 4698 scope.go:117] "RemoveContainer" containerID="02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69" Oct 06 12:48:16 crc kubenswrapper[4698]: E1006 12:48:16.575846 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69\": container with ID starting with 02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69 not found: ID does not exist" containerID="02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.575877 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69"} err="failed to get container status \"02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69\": rpc error: code = NotFound desc = could not find container \"02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69\": container with ID starting with 02a691ac7b816c93f8417510d396a74bd001f943f5a859eab40f3e44038c2a69 not found: ID does not exist" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.575919 4698 scope.go:117] "RemoveContainer" containerID="0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e" Oct 06 12:48:16 crc kubenswrapper[4698]: E1006 12:48:16.576287 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e\": container with ID starting with 0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e not found: ID does not exist" containerID="0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e" Oct 06 12:48:16 crc kubenswrapper[4698]: I1006 12:48:16.576323 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e"} err="failed to get container status \"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e\": rpc error: code = NotFound desc = could not find container \"0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e\": container with ID starting with 0cf5f4d225efca079103b9c41dca48165bd5af64a9177803ee0314dba17d568e not found: ID does not exist" Oct 06 12:48:17 crc kubenswrapper[4698]: I1006 12:48:17.350285 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" path="/var/lib/kubelet/pods/34069473-944d-4218-94ae-8c0dd36d17a8/volumes" Oct 06 12:48:20 crc kubenswrapper[4698]: I1006 
12:48:20.329710 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:48:20 crc kubenswrapper[4698]: E1006 12:48:20.330981 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:48:35 crc kubenswrapper[4698]: I1006 12:48:35.330156 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:48:35 crc kubenswrapper[4698]: E1006 12:48:35.331204 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:48:50 crc kubenswrapper[4698]: I1006 12:48:50.330545 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:48:50 crc kubenswrapper[4698]: E1006 12:48:50.333545 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:49:04 crc 
kubenswrapper[4698]: I1006 12:49:04.329807 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:49:05 crc kubenswrapper[4698]: I1006 12:49:05.115208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5"} Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.266569 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:02 crc kubenswrapper[4698]: E1006 12:50:02.268235 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="extract-content" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.268261 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="extract-content" Oct 06 12:50:02 crc kubenswrapper[4698]: E1006 12:50:02.268286 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="registry-server" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.268297 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="registry-server" Oct 06 12:50:02 crc kubenswrapper[4698]: E1006 12:50:02.268361 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="extract-utilities" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.268374 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="extract-utilities" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.268712 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34069473-944d-4218-94ae-8c0dd36d17a8" containerName="registry-server" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.272466 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.280469 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.418392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.418861 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.419587 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7s7\" (UniqueName: \"kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.522583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7s7\" (UniqueName: \"kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7\") pod \"certified-operators-9nf74\" 
(UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.523333 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.523555 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.524050 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.524242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.558149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7s7\" (UniqueName: \"kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7\") pod \"certified-operators-9nf74\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " 
pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:02 crc kubenswrapper[4698]: I1006 12:50:02.649237 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:03 crc kubenswrapper[4698]: I1006 12:50:03.326414 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:03 crc kubenswrapper[4698]: I1006 12:50:03.941813 4698 generic.go:334] "Generic (PLEG): container finished" podID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerID="ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c" exitCode=0 Oct 06 12:50:03 crc kubenswrapper[4698]: I1006 12:50:03.942228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerDied","Data":"ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c"} Oct 06 12:50:03 crc kubenswrapper[4698]: I1006 12:50:03.942268 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerStarted","Data":"bf3fd686ef0dd97293d85a58bdb57e6a90bebcdb607fffd092e4bd342af7b974"} Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.448740 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.453460 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.466237 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.573894 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.573966 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.574685 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7b6\" (UniqueName: \"kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.677426 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7b6\" (UniqueName: \"kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.678068 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.678131 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.678681 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.678810 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.723240 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7b6\" (UniqueName: \"kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6\") pod \"redhat-operators-dgn2z\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:04 crc kubenswrapper[4698]: I1006 12:50:04.785634 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:05 crc kubenswrapper[4698]: I1006 12:50:05.388663 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:05 crc kubenswrapper[4698]: I1006 12:50:05.974278 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerID="bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d" exitCode=0 Oct 06 12:50:05 crc kubenswrapper[4698]: I1006 12:50:05.974369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerDied","Data":"bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d"} Oct 06 12:50:05 crc kubenswrapper[4698]: I1006 12:50:05.974717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerStarted","Data":"716e4d952aa7df09a403cba3a154d69cb3afe3f1c8053dc79080fc71eabc8ea4"} Oct 06 12:50:05 crc kubenswrapper[4698]: I1006 12:50:05.981357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerStarted","Data":"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02"} Oct 06 12:50:06 crc kubenswrapper[4698]: I1006 12:50:06.998582 4698 generic.go:334] "Generic (PLEG): container finished" podID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerID="c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02" exitCode=0 Oct 06 12:50:06 crc kubenswrapper[4698]: I1006 12:50:06.998828 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" 
event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerDied","Data":"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02"} Oct 06 12:50:08 crc kubenswrapper[4698]: I1006 12:50:08.016739 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerStarted","Data":"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6"} Oct 06 12:50:08 crc kubenswrapper[4698]: I1006 12:50:08.021141 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerStarted","Data":"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f"} Oct 06 12:50:08 crc kubenswrapper[4698]: I1006 12:50:08.078805 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nf74" podStartSLOduration=2.303953871 podStartE2EDuration="6.078779795s" podCreationTimestamp="2025-10-06 12:50:02 +0000 UTC" firstStartedPulling="2025-10-06 12:50:03.944753154 +0000 UTC m=+3891.357445337" lastFinishedPulling="2025-10-06 12:50:07.719579088 +0000 UTC m=+3895.132271261" observedRunningTime="2025-10-06 12:50:08.074812007 +0000 UTC m=+3895.487504210" watchObservedRunningTime="2025-10-06 12:50:08.078779795 +0000 UTC m=+3895.491471968" Oct 06 12:50:12 crc kubenswrapper[4698]: I1006 12:50:12.073192 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerID="9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6" exitCode=0 Oct 06 12:50:12 crc kubenswrapper[4698]: I1006 12:50:12.073303 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" 
event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerDied","Data":"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6"} Oct 06 12:50:12 crc kubenswrapper[4698]: I1006 12:50:12.650480 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:12 crc kubenswrapper[4698]: I1006 12:50:12.650761 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:12 crc kubenswrapper[4698]: I1006 12:50:12.744032 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:13 crc kubenswrapper[4698]: I1006 12:50:13.089309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerStarted","Data":"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249"} Oct 06 12:50:13 crc kubenswrapper[4698]: I1006 12:50:13.124224 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgn2z" podStartSLOduration=2.402954975 podStartE2EDuration="9.124204978s" podCreationTimestamp="2025-10-06 12:50:04 +0000 UTC" firstStartedPulling="2025-10-06 12:50:05.97771545 +0000 UTC m=+3893.390407643" lastFinishedPulling="2025-10-06 12:50:12.698965463 +0000 UTC m=+3900.111657646" observedRunningTime="2025-10-06 12:50:13.12147593 +0000 UTC m=+3900.534168103" watchObservedRunningTime="2025-10-06 12:50:13.124204978 +0000 UTC m=+3900.536897151" Oct 06 12:50:13 crc kubenswrapper[4698]: I1006 12:50:13.168974 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:14 crc kubenswrapper[4698]: I1006 12:50:14.624164 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:14 crc kubenswrapper[4698]: I1006 12:50:14.786483 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:14 crc kubenswrapper[4698]: I1006 12:50:14.786990 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:15 crc kubenswrapper[4698]: I1006 12:50:15.860335 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dgn2z" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="registry-server" probeResult="failure" output=< Oct 06 12:50:15 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 12:50:15 crc kubenswrapper[4698]: > Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.118050 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nf74" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="registry-server" containerID="cri-o://b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f" gracePeriod=2 Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.674628 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.834368 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content\") pod \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.834609 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities\") pod \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.834655 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc7s7\" (UniqueName: \"kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7\") pod \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\" (UID: \"0756c8e6-1967-42f3-9be9-5e1a4b209db6\") " Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.835503 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities" (OuterVolumeSpecName: "utilities") pod "0756c8e6-1967-42f3-9be9-5e1a4b209db6" (UID: "0756c8e6-1967-42f3-9be9-5e1a4b209db6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.846348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7" (OuterVolumeSpecName: "kube-api-access-kc7s7") pod "0756c8e6-1967-42f3-9be9-5e1a4b209db6" (UID: "0756c8e6-1967-42f3-9be9-5e1a4b209db6"). InnerVolumeSpecName "kube-api-access-kc7s7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.880534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0756c8e6-1967-42f3-9be9-5e1a4b209db6" (UID: "0756c8e6-1967-42f3-9be9-5e1a4b209db6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.937472 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.937889 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0756c8e6-1967-42f3-9be9-5e1a4b209db6-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:16 crc kubenswrapper[4698]: I1006 12:50:16.937998 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc7s7\" (UniqueName: \"kubernetes.io/projected/0756c8e6-1967-42f3-9be9-5e1a4b209db6-kube-api-access-kc7s7\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.134546 4698 generic.go:334] "Generic (PLEG): container finished" podID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerID="b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f" exitCode=0 Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.134632 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerDied","Data":"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f"} Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.134693 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9nf74" event={"ID":"0756c8e6-1967-42f3-9be9-5e1a4b209db6","Type":"ContainerDied","Data":"bf3fd686ef0dd97293d85a58bdb57e6a90bebcdb607fffd092e4bd342af7b974"} Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.134723 4698 scope.go:117] "RemoveContainer" containerID="b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.135270 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nf74" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.161400 4698 scope.go:117] "RemoveContainer" containerID="c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.183954 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.195994 4698 scope.go:117] "RemoveContainer" containerID="ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.201581 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nf74"] Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.239362 4698 scope.go:117] "RemoveContainer" containerID="b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f" Oct 06 12:50:17 crc kubenswrapper[4698]: E1006 12:50:17.240010 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f\": container with ID starting with b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f not found: ID does not exist" containerID="b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 
12:50:17.240070 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f"} err="failed to get container status \"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f\": rpc error: code = NotFound desc = could not find container \"b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f\": container with ID starting with b6429135561c4962d5deccc4aa106bd43b12228ecad1edba9f3d75293816978f not found: ID does not exist" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.240100 4698 scope.go:117] "RemoveContainer" containerID="c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02" Oct 06 12:50:17 crc kubenswrapper[4698]: E1006 12:50:17.240581 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02\": container with ID starting with c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02 not found: ID does not exist" containerID="c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.240646 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02"} err="failed to get container status \"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02\": rpc error: code = NotFound desc = could not find container \"c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02\": container with ID starting with c434620736b9f904382fa68d7055a4777743d6119cfea5a8a6f54180dfb2df02 not found: ID does not exist" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.240695 4698 scope.go:117] "RemoveContainer" containerID="ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c" Oct 06 12:50:17 crc 
kubenswrapper[4698]: E1006 12:50:17.241228 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c\": container with ID starting with ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c not found: ID does not exist" containerID="ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.241308 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c"} err="failed to get container status \"ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c\": rpc error: code = NotFound desc = could not find container \"ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c\": container with ID starting with ffa5269e6f5a92781128a019ee7780492b0187b8e69046c9d41e076d4f824a6c not found: ID does not exist" Oct 06 12:50:17 crc kubenswrapper[4698]: I1006 12:50:17.341989 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" path="/var/lib/kubelet/pods/0756c8e6-1967-42f3-9be9-5e1a4b209db6/volumes" Oct 06 12:50:24 crc kubenswrapper[4698]: I1006 12:50:24.889098 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:24 crc kubenswrapper[4698]: I1006 12:50:24.975976 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:25 crc kubenswrapper[4698]: I1006 12:50:25.152506 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:26 crc kubenswrapper[4698]: I1006 12:50:26.267644 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-dgn2z" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="registry-server" containerID="cri-o://15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249" gracePeriod=2 Oct 06 12:50:26 crc kubenswrapper[4698]: I1006 12:50:26.894955 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.002423 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content\") pod \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.002622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z7b6\" (UniqueName: \"kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6\") pod \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.002789 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities\") pod \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\" (UID: \"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07\") " Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.003973 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities" (OuterVolumeSpecName: "utilities") pod "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" (UID: "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.012311 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6" (OuterVolumeSpecName: "kube-api-access-7z7b6") pod "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" (UID: "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07"). InnerVolumeSpecName "kube-api-access-7z7b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.105263 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" (UID: "8a266be4-c3f2-49cc-b9c1-87c19ccb1b07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.105736 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.105783 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z7b6\" (UniqueName: \"kubernetes.io/projected/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-kube-api-access-7z7b6\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.105798 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.280066 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" 
containerID="15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249" exitCode=0 Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.280138 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerDied","Data":"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249"} Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.280171 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgn2z" event={"ID":"8a266be4-c3f2-49cc-b9c1-87c19ccb1b07","Type":"ContainerDied","Data":"716e4d952aa7df09a403cba3a154d69cb3afe3f1c8053dc79080fc71eabc8ea4"} Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.280195 4698 scope.go:117] "RemoveContainer" containerID="15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.280251 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgn2z" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.318847 4698 scope.go:117] "RemoveContainer" containerID="9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.345946 4698 scope.go:117] "RemoveContainer" containerID="bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.347785 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.349482 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgn2z"] Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.422419 4698 scope.go:117] "RemoveContainer" containerID="15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249" Oct 06 12:50:27 crc kubenswrapper[4698]: E1006 12:50:27.423154 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249\": container with ID starting with 15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249 not found: ID does not exist" containerID="15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.423240 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249"} err="failed to get container status \"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249\": rpc error: code = NotFound desc = could not find container \"15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249\": container with ID starting with 15a606c6ecd5b09a340f764cffa1120a51e3cd3bbd85d56d0ec6fe766088b249 not found: ID does 
not exist" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.423292 4698 scope.go:117] "RemoveContainer" containerID="9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6" Oct 06 12:50:27 crc kubenswrapper[4698]: E1006 12:50:27.423739 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6\": container with ID starting with 9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6 not found: ID does not exist" containerID="9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.423815 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6"} err="failed to get container status \"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6\": rpc error: code = NotFound desc = could not find container \"9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6\": container with ID starting with 9bfc4e9b20a968230f07e79f92ae6ce1660c308219314009b3412f6dad6b08b6 not found: ID does not exist" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.423858 4698 scope.go:117] "RemoveContainer" containerID="bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d" Oct 06 12:50:27 crc kubenswrapper[4698]: E1006 12:50:27.424308 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d\": container with ID starting with bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d not found: ID does not exist" containerID="bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d" Oct 06 12:50:27 crc kubenswrapper[4698]: I1006 12:50:27.424360 4698 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d"} err="failed to get container status \"bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d\": rpc error: code = NotFound desc = could not find container \"bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d\": container with ID starting with bf93fa38beca5b4926c36c509c04ce001885ef879491feda39ab2f29de88744d not found: ID does not exist" Oct 06 12:50:29 crc kubenswrapper[4698]: I1006 12:50:29.343680 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" path="/var/lib/kubelet/pods/8a266be4-c3f2-49cc-b9c1-87c19ccb1b07/volumes" Oct 06 12:51:25 crc kubenswrapper[4698]: I1006 12:51:25.235706 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:51:25 crc kubenswrapper[4698]: I1006 12:51:25.236593 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:51:55 crc kubenswrapper[4698]: I1006 12:51:55.234857 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:51:55 crc kubenswrapper[4698]: I1006 12:51:55.236480 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:52:05 crc kubenswrapper[4698]: E1006 12:52:05.426322 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:41682->38.102.83.97:46695: write tcp 38.102.83.97:41682->38.102.83.97:46695: write: broken pipe Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.235050 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.236221 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.236307 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.237712 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.237789 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5" gracePeriod=600 Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.839786 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5" exitCode=0 Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.839854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5"} Oct 06 12:52:25 crc kubenswrapper[4698]: I1006 12:52:25.840710 4698 scope.go:117] "RemoveContainer" containerID="c4c83ca3930d120d0ae376401e679bd65358272ee9a7712ad8866eef2556db72" Oct 06 12:52:27 crc kubenswrapper[4698]: I1006 12:52:27.876330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84"} Oct 06 12:54:55 crc kubenswrapper[4698]: I1006 12:54:55.235826 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:54:55 crc kubenswrapper[4698]: I1006 12:54:55.237832 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:55:25 crc kubenswrapper[4698]: I1006 12:55:25.234865 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:55:25 crc kubenswrapper[4698]: I1006 12:55:25.235889 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.235567 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.236771 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.236875 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.238394 4698 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.238510 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" gracePeriod=600 Oct 06 12:55:55 crc kubenswrapper[4698]: E1006 12:55:55.411778 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.477695 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" exitCode=0 Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.477762 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84"} Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.477822 4698 scope.go:117] "RemoveContainer" 
containerID="27675d2bd8d7dc9cb9b9e7c68433c151742c365f956200413d443494c6b378d5" Oct 06 12:55:55 crc kubenswrapper[4698]: I1006 12:55:55.478785 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:55:55 crc kubenswrapper[4698]: E1006 12:55:55.480034 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:56:09 crc kubenswrapper[4698]: I1006 12:56:09.328913 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:56:09 crc kubenswrapper[4698]: E1006 12:56:09.330169 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:56:20 crc kubenswrapper[4698]: I1006 12:56:20.329172 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:56:20 crc kubenswrapper[4698]: E1006 12:56:20.329860 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:56:31 crc kubenswrapper[4698]: I1006 12:56:31.329785 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:56:31 crc kubenswrapper[4698]: E1006 12:56:31.330741 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:56:43 crc kubenswrapper[4698]: I1006 12:56:43.347172 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:56:43 crc kubenswrapper[4698]: E1006 12:56:43.348917 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:56:54 crc kubenswrapper[4698]: I1006 12:56:54.330505 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:56:54 crc kubenswrapper[4698]: E1006 12:56:54.331836 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:57:07 crc kubenswrapper[4698]: I1006 12:57:07.329988 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:57:07 crc kubenswrapper[4698]: E1006 12:57:07.331783 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:57:19 crc kubenswrapper[4698]: I1006 12:57:19.330721 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:57:19 crc kubenswrapper[4698]: E1006 12:57:19.331555 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.409941 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411606 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="extract-content" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 
12:57:23.411620 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="extract-content" Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411643 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="extract-utilities" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411649 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="extract-utilities" Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411671 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="extract-content" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411677 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="extract-content" Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411689 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411694 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411722 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="extract-utilities" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411728 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="extract-utilities" Oct 06 12:57:23 crc kubenswrapper[4698]: E1006 12:57:23.411739 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 
12:57:23.411744 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411929 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0756c8e6-1967-42f3-9be9-5e1a4b209db6" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.411949 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a266be4-c3f2-49cc-b9c1-87c19ccb1b07" containerName="registry-server" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.413653 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.424288 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.482064 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529x9\" (UniqueName: \"kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.482250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.482626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.584787 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.584862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529x9\" (UniqueName: \"kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.584966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.585524 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.588072 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.622255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529x9\" (UniqueName: \"kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9\") pod \"community-operators-68t6d\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:23 crc kubenswrapper[4698]: I1006 12:57:23.742853 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:24 crc kubenswrapper[4698]: I1006 12:57:24.264914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:25 crc kubenswrapper[4698]: I1006 12:57:25.609476 4698 generic.go:334] "Generic (PLEG): container finished" podID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerID="fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572" exitCode=0 Oct 06 12:57:25 crc kubenswrapper[4698]: I1006 12:57:25.609740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerDied","Data":"fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572"} Oct 06 12:57:25 crc kubenswrapper[4698]: I1006 12:57:25.610091 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerStarted","Data":"4be90927ccfce74708627a2fd335c02730a4e6c5d1d9539460c9d5f6bb136a2f"} Oct 06 12:57:25 crc kubenswrapper[4698]: I1006 12:57:25.612418 4698 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 06 12:57:27 crc kubenswrapper[4698]: I1006 12:57:27.636061 4698 generic.go:334] "Generic (PLEG): container finished" podID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerID="57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a" exitCode=0 Oct 06 12:57:27 crc kubenswrapper[4698]: I1006 12:57:27.636156 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerDied","Data":"57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a"} Oct 06 12:57:28 crc kubenswrapper[4698]: I1006 12:57:28.650620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerStarted","Data":"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a"} Oct 06 12:57:28 crc kubenswrapper[4698]: I1006 12:57:28.678649 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-68t6d" podStartSLOduration=3.260325722 podStartE2EDuration="5.678628929s" podCreationTimestamp="2025-10-06 12:57:23 +0000 UTC" firstStartedPulling="2025-10-06 12:57:25.612132181 +0000 UTC m=+4333.024824354" lastFinishedPulling="2025-10-06 12:57:28.030435388 +0000 UTC m=+4335.443127561" observedRunningTime="2025-10-06 12:57:28.672535409 +0000 UTC m=+4336.085227602" watchObservedRunningTime="2025-10-06 12:57:28.678628929 +0000 UTC m=+4336.091321102" Oct 06 12:57:33 crc kubenswrapper[4698]: I1006 12:57:33.334734 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:57:33 crc kubenswrapper[4698]: E1006 12:57:33.336409 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:57:33 crc kubenswrapper[4698]: I1006 12:57:33.743346 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:33 crc kubenswrapper[4698]: I1006 12:57:33.743719 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:33 crc kubenswrapper[4698]: I1006 12:57:33.804918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:34 crc kubenswrapper[4698]: I1006 12:57:34.760664 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:35 crc kubenswrapper[4698]: I1006 12:57:35.793230 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:36 crc kubenswrapper[4698]: I1006 12:57:36.746392 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-68t6d" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="registry-server" containerID="cri-o://60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a" gracePeriod=2 Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.293602 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.426119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content\") pod \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.426195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-529x9\" (UniqueName: \"kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9\") pod \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.426439 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities\") pod \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\" (UID: \"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85\") " Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.427356 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities" (OuterVolumeSpecName: "utilities") pod "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" (UID: "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.530212 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.771648 4698 generic.go:334] "Generic (PLEG): container finished" podID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerID="60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a" exitCode=0 Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.771701 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerDied","Data":"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a"} Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.771740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68t6d" event={"ID":"3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85","Type":"ContainerDied","Data":"4be90927ccfce74708627a2fd335c02730a4e6c5d1d9539460c9d5f6bb136a2f"} Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.771761 4698 scope.go:117] "RemoveContainer" containerID="60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.771939 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68t6d" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.813820 4698 scope.go:117] "RemoveContainer" containerID="57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.905858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" (UID: "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 12:57:37 crc kubenswrapper[4698]: I1006 12:57:37.938847 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.008796 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9" (OuterVolumeSpecName: "kube-api-access-529x9") pod "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" (UID: "3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85"). InnerVolumeSpecName "kube-api-access-529x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.033395 4698 scope.go:117] "RemoveContainer" containerID="fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.039921 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-529x9\" (UniqueName: \"kubernetes.io/projected/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85-kube-api-access-529x9\") on node \"crc\" DevicePath \"\"" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.182066 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.190725 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-68t6d"] Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.218494 4698 scope.go:117] "RemoveContainer" containerID="60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a" Oct 06 12:57:38 crc kubenswrapper[4698]: E1006 12:57:38.218926 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a\": container with ID starting with 60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a not found: ID does not exist" containerID="60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.219029 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a"} err="failed to get container status \"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a\": rpc error: code = NotFound desc = could not find container \"60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a\": container with ID starting 
with 60f33a1e0a58d15947482700441610225594ded9bb93cced6471450fc128457a not found: ID does not exist" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.219067 4698 scope.go:117] "RemoveContainer" containerID="57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a" Oct 06 12:57:38 crc kubenswrapper[4698]: E1006 12:57:38.219413 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a\": container with ID starting with 57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a not found: ID does not exist" containerID="57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.219481 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a"} err="failed to get container status \"57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a\": rpc error: code = NotFound desc = could not find container \"57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a\": container with ID starting with 57c862b40eceaeb770792aa1e80acc9b7993cb3dabec4e133be0a54eab5a6f3a not found: ID does not exist" Oct 06 12:57:38 crc kubenswrapper[4698]: I1006 12:57:38.219524 4698 scope.go:117] "RemoveContainer" containerID="fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572" Oct 06 12:57:38 crc kubenswrapper[4698]: E1006 12:57:38.219856 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572\": container with ID starting with fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572 not found: ID does not exist" containerID="fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572" Oct 06 12:57:38 
crc kubenswrapper[4698]: I1006 12:57:38.219886 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572"} err="failed to get container status \"fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572\": rpc error: code = NotFound desc = could not find container \"fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572\": container with ID starting with fd99dbe2c508afb471d2957d1adfc00622138d85a6fa8bcc1aa8b676d8405572 not found: ID does not exist" Oct 06 12:57:39 crc kubenswrapper[4698]: I1006 12:57:39.352948 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" path="/var/lib/kubelet/pods/3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85/volumes" Oct 06 12:57:44 crc kubenswrapper[4698]: I1006 12:57:44.329288 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:57:44 crc kubenswrapper[4698]: E1006 12:57:44.330282 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:57:57 crc kubenswrapper[4698]: I1006 12:57:57.330089 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:57:57 crc kubenswrapper[4698]: E1006 12:57:57.331332 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:58:12 crc kubenswrapper[4698]: I1006 12:58:12.329534 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:58:12 crc kubenswrapper[4698]: E1006 12:58:12.330400 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:58:26 crc kubenswrapper[4698]: I1006 12:58:26.330178 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:58:26 crc kubenswrapper[4698]: E1006 12:58:26.331325 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:58:41 crc kubenswrapper[4698]: I1006 12:58:41.330445 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:58:41 crc kubenswrapper[4698]: E1006 12:58:41.331853 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:58:55 crc kubenswrapper[4698]: I1006 12:58:55.332332 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:58:55 crc kubenswrapper[4698]: E1006 12:58:55.333562 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:59:10 crc kubenswrapper[4698]: I1006 12:59:10.329667 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:59:10 crc kubenswrapper[4698]: E1006 12:59:10.330584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:59:23 crc kubenswrapper[4698]: I1006 12:59:23.338036 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:59:23 crc kubenswrapper[4698]: E1006 12:59:23.339540 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:59:34 crc kubenswrapper[4698]: I1006 12:59:34.330099 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:59:34 crc kubenswrapper[4698]: E1006 12:59:34.331399 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 12:59:49 crc kubenswrapper[4698]: I1006 12:59:49.330427 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 12:59:49 crc kubenswrapper[4698]: E1006 12:59:49.331634 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.166971 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw"] Oct 06 13:00:00 crc kubenswrapper[4698]: E1006 13:00:00.168180 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" 
containerName="extract-utilities" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.168198 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="extract-utilities" Oct 06 13:00:00 crc kubenswrapper[4698]: E1006 13:00:00.168227 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="extract-content" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.168235 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="extract-content" Oct 06 13:00:00 crc kubenswrapper[4698]: E1006 13:00:00.168270 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.168278 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.168817 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7630e4-bd0c-4d08-9a2f-22b4baaf1a85" containerName="registry-server" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.169922 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.171881 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.172702 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.180360 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw"] Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.310880 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.311069 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599rm\" (UniqueName: \"kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.311105 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.412873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.413127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-599rm\" (UniqueName: \"kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.413160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.414205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.419834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.433773 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-599rm\" (UniqueName: \"kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm\") pod \"collect-profiles-29329260-lgsdw\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:00 crc kubenswrapper[4698]: I1006 13:00:00.507433 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:01 crc kubenswrapper[4698]: I1006 13:00:01.019727 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw"] Oct 06 13:00:01 crc kubenswrapper[4698]: I1006 13:00:01.430177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" event={"ID":"34c00f1f-a1ba-4793-aa6e-4ffa35418079","Type":"ContainerStarted","Data":"b437af2e069e75c002d354c91c328dfe02f183bc04867689820292ea319a1a16"} Oct 06 13:00:02 crc kubenswrapper[4698]: I1006 13:00:02.329470 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:00:02 crc kubenswrapper[4698]: E1006 13:00:02.330356 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:00:02 crc kubenswrapper[4698]: I1006 13:00:02.443905 4698 generic.go:334] "Generic (PLEG): container finished" podID="34c00f1f-a1ba-4793-aa6e-4ffa35418079" containerID="e27415881766870f71aaefc0bc56d61c802f07c78a9b62073cf3cde637b132ff" exitCode=0 Oct 06 13:00:02 crc kubenswrapper[4698]: I1006 13:00:02.443968 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" event={"ID":"34c00f1f-a1ba-4793-aa6e-4ffa35418079","Type":"ContainerDied","Data":"e27415881766870f71aaefc0bc56d61c802f07c78a9b62073cf3cde637b132ff"} Oct 06 13:00:03 crc kubenswrapper[4698]: I1006 13:00:03.838454 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.007966 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-599rm\" (UniqueName: \"kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm\") pod \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.008022 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume\") pod \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") " Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.008170 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume\") pod \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\" (UID: \"34c00f1f-a1ba-4793-aa6e-4ffa35418079\") 
" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.008965 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume" (OuterVolumeSpecName: "config-volume") pod "34c00f1f-a1ba-4793-aa6e-4ffa35418079" (UID: "34c00f1f-a1ba-4793-aa6e-4ffa35418079"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.015434 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm" (OuterVolumeSpecName: "kube-api-access-599rm") pod "34c00f1f-a1ba-4793-aa6e-4ffa35418079" (UID: "34c00f1f-a1ba-4793-aa6e-4ffa35418079"). InnerVolumeSpecName "kube-api-access-599rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.016973 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34c00f1f-a1ba-4793-aa6e-4ffa35418079" (UID: "34c00f1f-a1ba-4793-aa6e-4ffa35418079"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.110971 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-599rm\" (UniqueName: \"kubernetes.io/projected/34c00f1f-a1ba-4793-aa6e-4ffa35418079-kube-api-access-599rm\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.111054 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c00f1f-a1ba-4793-aa6e-4ffa35418079-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.111071 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c00f1f-a1ba-4793-aa6e-4ffa35418079-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.466481 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" event={"ID":"34c00f1f-a1ba-4793-aa6e-4ffa35418079","Type":"ContainerDied","Data":"b437af2e069e75c002d354c91c328dfe02f183bc04867689820292ea319a1a16"} Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.466535 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b437af2e069e75c002d354c91c328dfe02f183bc04867689820292ea319a1a16" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.466602 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329260-lgsdw" Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.916262 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f"] Oct 06 13:00:04 crc kubenswrapper[4698]: I1006 13:00:04.926645 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329215-5xm5f"] Oct 06 13:00:05 crc kubenswrapper[4698]: I1006 13:00:05.347631 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316e3a91-b7c6-468d-aff2-fef1ed882113" path="/var/lib/kubelet/pods/316e3a91-b7c6-468d-aff2-fef1ed882113/volumes" Oct 06 13:00:13 crc kubenswrapper[4698]: I1006 13:00:13.337166 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:00:13 crc kubenswrapper[4698]: E1006 13:00:13.338098 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:00:26 crc kubenswrapper[4698]: I1006 13:00:26.328916 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:00:26 crc kubenswrapper[4698]: E1006 13:00:26.329793 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:00:41 crc kubenswrapper[4698]: I1006 13:00:41.329344 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:00:41 crc kubenswrapper[4698]: E1006 13:00:41.330296 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:00:44 crc kubenswrapper[4698]: I1006 13:00:44.488365 4698 scope.go:117] "RemoveContainer" containerID="54047dedbd345b5fad8b9caa1736340b250f5fdb00dae6412586052585c79c1a" Oct 06 13:00:53 crc kubenswrapper[4698]: I1006 13:00:53.337316 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:00:53 crc kubenswrapper[4698]: E1006 13:00:53.338443 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.186041 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29329261-9wv7t"] Oct 06 13:01:00 crc kubenswrapper[4698]: E1006 13:01:00.187460 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c00f1f-a1ba-4793-aa6e-4ffa35418079" 
containerName="collect-profiles" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.187486 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c00f1f-a1ba-4793-aa6e-4ffa35418079" containerName="collect-profiles" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.187806 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c00f1f-a1ba-4793-aa6e-4ffa35418079" containerName="collect-profiles" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.189050 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.202900 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329261-9wv7t"] Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.278405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfpvk\" (UniqueName: \"kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.278791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.278842 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " 
pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.279119 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.380856 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.381097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfpvk\" (UniqueName: \"kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.381120 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.381184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " 
pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.389236 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.390995 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.391507 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.399274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfpvk\" (UniqueName: \"kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk\") pod \"keystone-cron-29329261-9wv7t\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:00 crc kubenswrapper[4698]: I1006 13:01:00.528425 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:01 crc kubenswrapper[4698]: I1006 13:01:01.013763 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329261-9wv7t"] Oct 06 13:01:01 crc kubenswrapper[4698]: I1006 13:01:01.080658 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-9wv7t" event={"ID":"4a476195-2a9a-4be4-8199-16903da18935","Type":"ContainerStarted","Data":"4aedab018e286b061a76e8bc7f345d81ade4ff9b1845e79d7adefef3a9ab7fb7"} Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.097419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-9wv7t" event={"ID":"4a476195-2a9a-4be4-8199-16903da18935","Type":"ContainerStarted","Data":"eb0d64e950616da3b4c5877f6fbdbaf788535bb6a13d85f0c68639b0fd79f6de"} Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.124957 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29329261-9wv7t" podStartSLOduration=2.124934043 podStartE2EDuration="2.124934043s" podCreationTimestamp="2025-10-06 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:01:02.123909808 +0000 UTC m=+4549.536601971" watchObservedRunningTime="2025-10-06 13:01:02.124934043 +0000 UTC m=+4549.537626216" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.204416 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.209280 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.223448 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.248181 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.248630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znvjl\" (UniqueName: \"kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.248937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.350247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.350839 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.351235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.351707 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.351818 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znvjl\" (UniqueName: \"kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.379495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znvjl\" (UniqueName: \"kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl\") pod \"redhat-operators-7tzsp\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:02 crc kubenswrapper[4698]: I1006 13:01:02.573061 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:03 crc kubenswrapper[4698]: I1006 13:01:03.103192 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:03 crc kubenswrapper[4698]: W1006 13:01:03.113626 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b53dc2_0b12_432c_936c_26be408a659b.slice/crio-82604f99e8545a9fc7728da623539e6722b3b8f8189c4fa66b024cbe0bc10b21 WatchSource:0}: Error finding container 82604f99e8545a9fc7728da623539e6722b3b8f8189c4fa66b024cbe0bc10b21: Status 404 returned error can't find the container with id 82604f99e8545a9fc7728da623539e6722b3b8f8189c4fa66b024cbe0bc10b21 Oct 06 13:01:04 crc kubenswrapper[4698]: I1006 13:01:04.136301 4698 generic.go:334] "Generic (PLEG): container finished" podID="4a476195-2a9a-4be4-8199-16903da18935" containerID="eb0d64e950616da3b4c5877f6fbdbaf788535bb6a13d85f0c68639b0fd79f6de" exitCode=0 Oct 06 13:01:04 crc kubenswrapper[4698]: I1006 13:01:04.136408 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-9wv7t" event={"ID":"4a476195-2a9a-4be4-8199-16903da18935","Type":"ContainerDied","Data":"eb0d64e950616da3b4c5877f6fbdbaf788535bb6a13d85f0c68639b0fd79f6de"} Oct 06 13:01:04 crc kubenswrapper[4698]: I1006 13:01:04.140316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerStarted","Data":"82604f99e8545a9fc7728da623539e6722b3b8f8189c4fa66b024cbe0bc10b21"} Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.156099 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3b53dc2-0b12-432c-936c-26be408a659b" containerID="1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482" exitCode=0 Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.156200 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerDied","Data":"1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482"} Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.528466 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.638295 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfpvk\" (UniqueName: \"kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk\") pod \"4a476195-2a9a-4be4-8199-16903da18935\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.638406 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data\") pod \"4a476195-2a9a-4be4-8199-16903da18935\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.638570 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle\") pod \"4a476195-2a9a-4be4-8199-16903da18935\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.638829 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys\") pod \"4a476195-2a9a-4be4-8199-16903da18935\" (UID: \"4a476195-2a9a-4be4-8199-16903da18935\") " Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.647910 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a476195-2a9a-4be4-8199-16903da18935" (UID: "4a476195-2a9a-4be4-8199-16903da18935"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.648758 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk" (OuterVolumeSpecName: "kube-api-access-zfpvk") pod "4a476195-2a9a-4be4-8199-16903da18935" (UID: "4a476195-2a9a-4be4-8199-16903da18935"). InnerVolumeSpecName "kube-api-access-zfpvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.674130 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a476195-2a9a-4be4-8199-16903da18935" (UID: "4a476195-2a9a-4be4-8199-16903da18935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.708896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data" (OuterVolumeSpecName: "config-data") pod "4a476195-2a9a-4be4-8199-16903da18935" (UID: "4a476195-2a9a-4be4-8199-16903da18935"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.745412 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.745472 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfpvk\" (UniqueName: \"kubernetes.io/projected/4a476195-2a9a-4be4-8199-16903da18935-kube-api-access-zfpvk\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.745486 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:05 crc kubenswrapper[4698]: I1006 13:01:05.745497 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a476195-2a9a-4be4-8199-16903da18935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:06 crc kubenswrapper[4698]: I1006 13:01:06.185874 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329261-9wv7t" event={"ID":"4a476195-2a9a-4be4-8199-16903da18935","Type":"ContainerDied","Data":"4aedab018e286b061a76e8bc7f345d81ade4ff9b1845e79d7adefef3a9ab7fb7"} Oct 06 13:01:06 crc kubenswrapper[4698]: I1006 13:01:06.185919 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aedab018e286b061a76e8bc7f345d81ade4ff9b1845e79d7adefef3a9ab7fb7" Oct 06 13:01:06 crc kubenswrapper[4698]: I1006 13:01:06.185936 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29329261-9wv7t" Oct 06 13:01:06 crc kubenswrapper[4698]: I1006 13:01:06.333453 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:01:07 crc kubenswrapper[4698]: I1006 13:01:07.202496 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerStarted","Data":"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489"} Oct 06 13:01:07 crc kubenswrapper[4698]: I1006 13:01:07.208210 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9"} Oct 06 13:01:08 crc kubenswrapper[4698]: I1006 13:01:08.223096 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3b53dc2-0b12-432c-936c-26be408a659b" containerID="69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489" exitCode=0 Oct 06 13:01:08 crc kubenswrapper[4698]: I1006 13:01:08.223179 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerDied","Data":"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489"} Oct 06 13:01:10 crc kubenswrapper[4698]: I1006 13:01:10.250866 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerStarted","Data":"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc"} Oct 06 13:01:10 crc kubenswrapper[4698]: I1006 13:01:10.277923 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7tzsp" 
podStartSLOduration=4.491110973 podStartE2EDuration="8.277899657s" podCreationTimestamp="2025-10-06 13:01:02 +0000 UTC" firstStartedPulling="2025-10-06 13:01:05.159674748 +0000 UTC m=+4552.572366931" lastFinishedPulling="2025-10-06 13:01:08.946463432 +0000 UTC m=+4556.359155615" observedRunningTime="2025-10-06 13:01:10.27399029 +0000 UTC m=+4557.686682463" watchObservedRunningTime="2025-10-06 13:01:10.277899657 +0000 UTC m=+4557.690591860" Oct 06 13:01:12 crc kubenswrapper[4698]: I1006 13:01:12.573898 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:12 crc kubenswrapper[4698]: I1006 13:01:12.574687 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:13 crc kubenswrapper[4698]: I1006 13:01:13.641705 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7tzsp" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="registry-server" probeResult="failure" output=< Oct 06 13:01:13 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 13:01:13 crc kubenswrapper[4698]: > Oct 06 13:01:14 crc kubenswrapper[4698]: E1006 13:01:14.632918 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:38414->38.102.83.97:46695: write tcp 38.102.83.97:38414->38.102.83.97:46695: write: broken pipe Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.479879 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:16 crc kubenswrapper[4698]: E1006 13:01:16.480826 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a476195-2a9a-4be4-8199-16903da18935" containerName="keystone-cron" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.480845 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4a476195-2a9a-4be4-8199-16903da18935" containerName="keystone-cron" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.481084 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a476195-2a9a-4be4-8199-16903da18935" containerName="keystone-cron" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.482725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.573083 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.614640 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.615318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.615420 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgqz\" (UniqueName: \"kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.717518 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.717596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgqz\" (UniqueName: \"kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.717655 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.718156 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.718370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.744375 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xqgqz\" (UniqueName: \"kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz\") pod \"certified-operators-5th45\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:16 crc kubenswrapper[4698]: I1006 13:01:16.847847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:17 crc kubenswrapper[4698]: I1006 13:01:17.365816 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:17 crc kubenswrapper[4698]: W1006 13:01:17.373792 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff07567_eb02_4d2f_8e25_f3eb8a3749a9.slice/crio-f4ebb641700ac66f19f0d3feb5cd513beb42c7bc55caaf2e8f963cfe0322a576 WatchSource:0}: Error finding container f4ebb641700ac66f19f0d3feb5cd513beb42c7bc55caaf2e8f963cfe0322a576: Status 404 returned error can't find the container with id f4ebb641700ac66f19f0d3feb5cd513beb42c7bc55caaf2e8f963cfe0322a576 Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.352972 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerStarted","Data":"f4ebb641700ac66f19f0d3feb5cd513beb42c7bc55caaf2e8f963cfe0322a576"} Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.876654 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.879726 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.894865 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.962324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.962395 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc75\" (UniqueName: \"kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:18 crc kubenswrapper[4698]: I1006 13:01:18.962489 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.064338 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.064389 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nbc75\" (UniqueName: \"kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.064443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.064858 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.064864 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.095300 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc75\" (UniqueName: \"kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75\") pod \"redhat-marketplace-pnbkz\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.207008 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.372902 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerID="7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366" exitCode=0 Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.373333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerDied","Data":"7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366"} Oct 06 13:01:19 crc kubenswrapper[4698]: I1006 13:01:19.711902 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:20 crc kubenswrapper[4698]: I1006 13:01:20.402710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerStarted","Data":"bb48dcaee7afcc55638f1d5d8dece7bb92365c012e7922e0501823685424fe87"} Oct 06 13:01:21 crc kubenswrapper[4698]: I1006 13:01:21.414113 4698 generic.go:334] "Generic (PLEG): container finished" podID="74101774-28e6-49fd-b05f-71137d1318a9" containerID="345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45" exitCode=0 Oct 06 13:01:21 crc kubenswrapper[4698]: I1006 13:01:21.414178 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerDied","Data":"345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45"} Oct 06 13:01:21 crc kubenswrapper[4698]: I1006 13:01:21.417848 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" 
event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerStarted","Data":"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac"} Oct 06 13:01:22 crc kubenswrapper[4698]: I1006 13:01:22.428228 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerID="665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac" exitCode=0 Oct 06 13:01:22 crc kubenswrapper[4698]: I1006 13:01:22.428298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerDied","Data":"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac"} Oct 06 13:01:22 crc kubenswrapper[4698]: I1006 13:01:22.624565 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:22 crc kubenswrapper[4698]: I1006 13:01:22.682463 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:23 crc kubenswrapper[4698]: I1006 13:01:23.444274 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerStarted","Data":"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86"} Oct 06 13:01:24 crc kubenswrapper[4698]: I1006 13:01:24.464291 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerStarted","Data":"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040"} Oct 06 13:01:24 crc kubenswrapper[4698]: I1006 13:01:24.470993 4698 generic.go:334] "Generic (PLEG): container finished" podID="74101774-28e6-49fd-b05f-71137d1318a9" containerID="f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86" 
exitCode=0 Oct 06 13:01:24 crc kubenswrapper[4698]: I1006 13:01:24.471056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerDied","Data":"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86"} Oct 06 13:01:24 crc kubenswrapper[4698]: I1006 13:01:24.497825 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5th45" podStartSLOduration=4.503197188 podStartE2EDuration="8.497800998s" podCreationTimestamp="2025-10-06 13:01:16 +0000 UTC" firstStartedPulling="2025-10-06 13:01:19.375217043 +0000 UTC m=+4566.787909216" lastFinishedPulling="2025-10-06 13:01:23.369820853 +0000 UTC m=+4570.782513026" observedRunningTime="2025-10-06 13:01:24.484658444 +0000 UTC m=+4571.897350617" watchObservedRunningTime="2025-10-06 13:01:24.497800998 +0000 UTC m=+4571.910493181" Oct 06 13:01:25 crc kubenswrapper[4698]: I1006 13:01:25.471926 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:25 crc kubenswrapper[4698]: I1006 13:01:25.472583 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7tzsp" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="registry-server" containerID="cri-o://176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc" gracePeriod=2 Oct 06 13:01:25 crc kubenswrapper[4698]: I1006 13:01:25.488346 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerStarted","Data":"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2"} Oct 06 13:01:25 crc kubenswrapper[4698]: I1006 13:01:25.512556 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-pnbkz" podStartSLOduration=3.871680704 podStartE2EDuration="7.512529593s" podCreationTimestamp="2025-10-06 13:01:18 +0000 UTC" firstStartedPulling="2025-10-06 13:01:21.416002414 +0000 UTC m=+4568.828694617" lastFinishedPulling="2025-10-06 13:01:25.056851333 +0000 UTC m=+4572.469543506" observedRunningTime="2025-10-06 13:01:25.510486563 +0000 UTC m=+4572.923178756" watchObservedRunningTime="2025-10-06 13:01:25.512529593 +0000 UTC m=+4572.925221796" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.489994 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.499775 4698 generic.go:334] "Generic (PLEG): container finished" podID="c3b53dc2-0b12-432c-936c-26be408a659b" containerID="176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc" exitCode=0 Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.499859 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzsp" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.499862 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerDied","Data":"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc"} Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.499940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzsp" event={"ID":"c3b53dc2-0b12-432c-936c-26be408a659b","Type":"ContainerDied","Data":"82604f99e8545a9fc7728da623539e6722b3b8f8189c4fa66b024cbe0bc10b21"} Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.499964 4698 scope.go:117] "RemoveContainer" containerID="176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.549710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities\") pod \"c3b53dc2-0b12-432c-936c-26be408a659b\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.550440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znvjl\" (UniqueName: \"kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl\") pod \"c3b53dc2-0b12-432c-936c-26be408a659b\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.550527 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content\") pod \"c3b53dc2-0b12-432c-936c-26be408a659b\" (UID: \"c3b53dc2-0b12-432c-936c-26be408a659b\") " Oct 06 13:01:26 crc kubenswrapper[4698]: 
I1006 13:01:26.553978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities" (OuterVolumeSpecName: "utilities") pod "c3b53dc2-0b12-432c-936c-26be408a659b" (UID: "c3b53dc2-0b12-432c-936c-26be408a659b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.556342 4698 scope.go:117] "RemoveContainer" containerID="69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.567939 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl" (OuterVolumeSpecName: "kube-api-access-znvjl") pod "c3b53dc2-0b12-432c-936c-26be408a659b" (UID: "c3b53dc2-0b12-432c-936c-26be408a659b"). InnerVolumeSpecName "kube-api-access-znvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.654662 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znvjl\" (UniqueName: \"kubernetes.io/projected/c3b53dc2-0b12-432c-936c-26be408a659b-kube-api-access-znvjl\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.654699 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.669873 4698 scope.go:117] "RemoveContainer" containerID="1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.697935 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "c3b53dc2-0b12-432c-936c-26be408a659b" (UID: "c3b53dc2-0b12-432c-936c-26be408a659b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.720678 4698 scope.go:117] "RemoveContainer" containerID="176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc" Oct 06 13:01:26 crc kubenswrapper[4698]: E1006 13:01:26.721025 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc\": container with ID starting with 176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc not found: ID does not exist" containerID="176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.721073 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc"} err="failed to get container status \"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc\": rpc error: code = NotFound desc = could not find container \"176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc\": container with ID starting with 176f259ff7bbaa2fe64a4be81929a0afe81bb64f5754e8f9351446cfec096dbc not found: ID does not exist" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.721102 4698 scope.go:117] "RemoveContainer" containerID="69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489" Oct 06 13:01:26 crc kubenswrapper[4698]: E1006 13:01:26.721363 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489\": container with ID starting with 69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489 not found: ID does not 
exist" containerID="69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.721395 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489"} err="failed to get container status \"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489\": rpc error: code = NotFound desc = could not find container \"69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489\": container with ID starting with 69de2ed96499161ffe2d819185d9752246a209dc77514e59777e95d6348ab489 not found: ID does not exist" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.721417 4698 scope.go:117] "RemoveContainer" containerID="1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482" Oct 06 13:01:26 crc kubenswrapper[4698]: E1006 13:01:26.721660 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482\": container with ID starting with 1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482 not found: ID does not exist" containerID="1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.721698 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482"} err="failed to get container status \"1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482\": rpc error: code = NotFound desc = could not find container \"1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482\": container with ID starting with 1161c442255fa9c6b5c4cb9f7fe90ff00c77ef9c53dadcb0fad27894c0baa482 not found: ID does not exist" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.756430 4698 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b53dc2-0b12-432c-936c-26be408a659b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.836570 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.845850 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7tzsp"] Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.848204 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:26 crc kubenswrapper[4698]: I1006 13:01:26.848554 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:27 crc kubenswrapper[4698]: I1006 13:01:27.343624 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" path="/var/lib/kubelet/pods/c3b53dc2-0b12-432c-936c-26be408a659b/volumes" Oct 06 13:01:27 crc kubenswrapper[4698]: I1006 13:01:27.899870 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5th45" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="registry-server" probeResult="failure" output=< Oct 06 13:01:27 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 13:01:27 crc kubenswrapper[4698]: > Oct 06 13:01:29 crc kubenswrapper[4698]: I1006 13:01:29.207374 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:29 crc kubenswrapper[4698]: I1006 13:01:29.207966 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:29 crc 
kubenswrapper[4698]: I1006 13:01:29.277920 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:36 crc kubenswrapper[4698]: I1006 13:01:36.907646 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:36 crc kubenswrapper[4698]: I1006 13:01:36.962786 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:37 crc kubenswrapper[4698]: I1006 13:01:37.157246 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:38 crc kubenswrapper[4698]: I1006 13:01:38.653752 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5th45" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="registry-server" containerID="cri-o://bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040" gracePeriod=2 Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.280702 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.381292 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.507290 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgqz\" (UniqueName: \"kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz\") pod \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.507537 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities\") pod \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.507758 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content\") pod \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\" (UID: \"aff07567-eb02-4d2f-8e25-f3eb8a3749a9\") " Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.509100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities" (OuterVolumeSpecName: "utilities") pod "aff07567-eb02-4d2f-8e25-f3eb8a3749a9" (UID: "aff07567-eb02-4d2f-8e25-f3eb8a3749a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.518715 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz" (OuterVolumeSpecName: "kube-api-access-xqgqz") pod "aff07567-eb02-4d2f-8e25-f3eb8a3749a9" (UID: "aff07567-eb02-4d2f-8e25-f3eb8a3749a9"). InnerVolumeSpecName "kube-api-access-xqgqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.556135 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.571734 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aff07567-eb02-4d2f-8e25-f3eb8a3749a9" (UID: "aff07567-eb02-4d2f-8e25-f3eb8a3749a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.610662 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.610704 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgqz\" (UniqueName: \"kubernetes.io/projected/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-kube-api-access-xqgqz\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.610717 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff07567-eb02-4d2f-8e25-f3eb8a3749a9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.696900 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerID="bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040" exitCode=0 Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.697214 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pnbkz" podUID="74101774-28e6-49fd-b05f-71137d1318a9" 
containerName="registry-server" containerID="cri-o://6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2" gracePeriod=2 Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.697585 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5th45" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.699784 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerDied","Data":"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040"} Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.699831 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5th45" event={"ID":"aff07567-eb02-4d2f-8e25-f3eb8a3749a9","Type":"ContainerDied","Data":"f4ebb641700ac66f19f0d3feb5cd513beb42c7bc55caaf2e8f963cfe0322a576"} Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.699870 4698 scope.go:117] "RemoveContainer" containerID="bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.725859 4698 scope.go:117] "RemoveContainer" containerID="665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.742623 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.752041 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5th45"] Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.757359 4698 scope.go:117] "RemoveContainer" containerID="7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.940201 4698 scope.go:117] "RemoveContainer" 
containerID="bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040" Oct 06 13:01:39 crc kubenswrapper[4698]: E1006 13:01:39.940795 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040\": container with ID starting with bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040 not found: ID does not exist" containerID="bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.940852 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040"} err="failed to get container status \"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040\": rpc error: code = NotFound desc = could not find container \"bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040\": container with ID starting with bf3345a618e098d35aeefd62365991640b7f218bc078402f3c73879259cd2040 not found: ID does not exist" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.940886 4698 scope.go:117] "RemoveContainer" containerID="665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac" Oct 06 13:01:39 crc kubenswrapper[4698]: E1006 13:01:39.941745 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac\": container with ID starting with 665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac not found: ID does not exist" containerID="665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.941790 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac"} err="failed to get container status \"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac\": rpc error: code = NotFound desc = could not find container \"665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac\": container with ID starting with 665e38d4c3b473b66784b0a7277d60e6544aee4c2cf5c69d61c6719fba7465ac not found: ID does not exist" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.941806 4698 scope.go:117] "RemoveContainer" containerID="7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366" Oct 06 13:01:39 crc kubenswrapper[4698]: E1006 13:01:39.942787 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366\": container with ID starting with 7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366 not found: ID does not exist" containerID="7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366" Oct 06 13:01:39 crc kubenswrapper[4698]: I1006 13:01:39.942841 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366"} err="failed to get container status \"7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366\": rpc error: code = NotFound desc = could not find container \"7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366\": container with ID starting with 7cf21e570f7398f5597c606eb58a26caafcd62995691ccd9ca97dd5ce5e55366 not found: ID does not exist" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.327195 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.427922 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbc75\" (UniqueName: \"kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75\") pod \"74101774-28e6-49fd-b05f-71137d1318a9\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.428083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities\") pod \"74101774-28e6-49fd-b05f-71137d1318a9\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.428131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content\") pod \"74101774-28e6-49fd-b05f-71137d1318a9\" (UID: \"74101774-28e6-49fd-b05f-71137d1318a9\") " Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.429218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities" (OuterVolumeSpecName: "utilities") pod "74101774-28e6-49fd-b05f-71137d1318a9" (UID: "74101774-28e6-49fd-b05f-71137d1318a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.429815 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.434946 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75" (OuterVolumeSpecName: "kube-api-access-nbc75") pod "74101774-28e6-49fd-b05f-71137d1318a9" (UID: "74101774-28e6-49fd-b05f-71137d1318a9"). InnerVolumeSpecName "kube-api-access-nbc75". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.447746 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74101774-28e6-49fd-b05f-71137d1318a9" (UID: "74101774-28e6-49fd-b05f-71137d1318a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.532866 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74101774-28e6-49fd-b05f-71137d1318a9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.532906 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbc75\" (UniqueName: \"kubernetes.io/projected/74101774-28e6-49fd-b05f-71137d1318a9-kube-api-access-nbc75\") on node \"crc\" DevicePath \"\"" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.711319 4698 generic.go:334] "Generic (PLEG): container finished" podID="74101774-28e6-49fd-b05f-71137d1318a9" containerID="6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2" exitCode=0 Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.711378 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerDied","Data":"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2"} Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.711416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnbkz" event={"ID":"74101774-28e6-49fd-b05f-71137d1318a9","Type":"ContainerDied","Data":"bb48dcaee7afcc55638f1d5d8dece7bb92365c012e7922e0501823685424fe87"} Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.711447 4698 scope.go:117] "RemoveContainer" containerID="6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.711621 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnbkz" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.743368 4698 scope.go:117] "RemoveContainer" containerID="f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.752801 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.764082 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnbkz"] Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.781754 4698 scope.go:117] "RemoveContainer" containerID="345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.806841 4698 scope.go:117] "RemoveContainer" containerID="6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2" Oct 06 13:01:40 crc kubenswrapper[4698]: E1006 13:01:40.807626 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2\": container with ID starting with 6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2 not found: ID does not exist" containerID="6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.807679 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2"} err="failed to get container status \"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2\": rpc error: code = NotFound desc = could not find container \"6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2\": container with ID starting with 6844c73147ee5468001e5fd1a37d35b2138e71741bb12697204260dd79843da2 not found: 
ID does not exist" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.807710 4698 scope.go:117] "RemoveContainer" containerID="f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86" Oct 06 13:01:40 crc kubenswrapper[4698]: E1006 13:01:40.808200 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86\": container with ID starting with f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86 not found: ID does not exist" containerID="f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.808234 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86"} err="failed to get container status \"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86\": rpc error: code = NotFound desc = could not find container \"f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86\": container with ID starting with f8d66a02eec4866aaf63744bc0bc986a071ef98b067ac41651d49cafc11f2b86 not found: ID does not exist" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.808256 4698 scope.go:117] "RemoveContainer" containerID="345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45" Oct 06 13:01:40 crc kubenswrapper[4698]: E1006 13:01:40.808656 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45\": container with ID starting with 345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45 not found: ID does not exist" containerID="345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45" Oct 06 13:01:40 crc kubenswrapper[4698]: I1006 13:01:40.808681 4698 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45"} err="failed to get container status \"345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45\": rpc error: code = NotFound desc = could not find container \"345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45\": container with ID starting with 345b3e8251867609b3fe69ed126f920a9c79454b63d3cf7cb345cdd33b0bff45 not found: ID does not exist" Oct 06 13:01:41 crc kubenswrapper[4698]: I1006 13:01:41.343104 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74101774-28e6-49fd-b05f-71137d1318a9" path="/var/lib/kubelet/pods/74101774-28e6-49fd-b05f-71137d1318a9/volumes" Oct 06 13:01:41 crc kubenswrapper[4698]: I1006 13:01:41.344059 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" path="/var/lib/kubelet/pods/aff07567-eb02-4d2f-8e25-f3eb8a3749a9/volumes" Oct 06 13:03:25 crc kubenswrapper[4698]: I1006 13:03:25.235307 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:03:25 crc kubenswrapper[4698]: I1006 13:03:25.235823 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:03:55 crc kubenswrapper[4698]: I1006 13:03:55.235530 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:03:55 crc kubenswrapper[4698]: I1006 13:03:55.236273 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:04:25 crc kubenswrapper[4698]: I1006 13:04:25.235520 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:04:25 crc kubenswrapper[4698]: I1006 13:04:25.236051 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:04:25 crc kubenswrapper[4698]: I1006 13:04:25.236103 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:04:25 crc kubenswrapper[4698]: I1006 13:04:25.236984 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:04:25 crc kubenswrapper[4698]: I1006 13:04:25.237310 4698 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9" gracePeriod=600 Oct 06 13:04:26 crc kubenswrapper[4698]: I1006 13:04:26.722198 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9" exitCode=0 Oct 06 13:04:26 crc kubenswrapper[4698]: I1006 13:04:26.722815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9"} Oct 06 13:04:26 crc kubenswrapper[4698]: I1006 13:04:26.722849 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24"} Oct 06 13:04:26 crc kubenswrapper[4698]: I1006 13:04:26.722871 4698 scope.go:117] "RemoveContainer" containerID="fe8f589aba515913605e0161400d11d7cb8f0b91637d6e33ebab6e99070afc84" Oct 06 13:06:55 crc kubenswrapper[4698]: I1006 13:06:55.235318 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:06:55 crc kubenswrapper[4698]: I1006 13:06:55.235931 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:07:25 crc kubenswrapper[4698]: I1006 13:07:25.235463 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:07:25 crc kubenswrapper[4698]: I1006 13:07:25.236146 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:07:55 crc kubenswrapper[4698]: I1006 13:07:55.235726 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:07:55 crc kubenswrapper[4698]: I1006 13:07:55.236429 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:07:55 crc kubenswrapper[4698]: I1006 13:07:55.236486 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:07:55 crc kubenswrapper[4698]: I1006 13:07:55.237420 4698 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:07:55 crc kubenswrapper[4698]: I1006 13:07:55.237492 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" gracePeriod=600 Oct 06 13:07:55 crc kubenswrapper[4698]: E1006 13:07:55.466190 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:07:56 crc kubenswrapper[4698]: I1006 13:07:56.111729 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" exitCode=0 Oct 06 13:07:56 crc kubenswrapper[4698]: I1006 13:07:56.111806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24"} Oct 06 13:07:56 crc kubenswrapper[4698]: I1006 13:07:56.112604 4698 scope.go:117] "RemoveContainer" containerID="a20dc5de7a213c2c0934790f8ee1ee8c5e6ef20fc94b37a91dc50dbcfa8707a9" Oct 06 13:07:56 crc 
kubenswrapper[4698]: I1006 13:07:56.113393 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:07:56 crc kubenswrapper[4698]: E1006 13:07:56.113722 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.897195 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898203 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898230 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898263 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898275 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898295 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898305 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" 
containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898331 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898341 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898385 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898396 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898414 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898425 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898443 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898454 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="extract-utilities" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898477 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898487 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" 
containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: E1006 13:07:57.898507 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898517 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="extract-content" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898815 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff07567-eb02-4d2f-8e25-f3eb8a3749a9" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898851 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b53dc2-0b12-432c-936c-26be408a659b" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.898873 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="74101774-28e6-49fd-b05f-71137d1318a9" containerName="registry-server" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.901583 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:57 crc kubenswrapper[4698]: I1006 13:07:57.910423 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.003721 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wkn\" (UniqueName: \"kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.003903 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.004122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.105866 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.106037 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.106121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wkn\" (UniqueName: \"kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.106517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.106917 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.131464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wkn\" (UniqueName: \"kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn\") pod \"community-operators-hn8xw\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.254737 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:07:58 crc kubenswrapper[4698]: I1006 13:07:58.816335 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:07:59 crc kubenswrapper[4698]: I1006 13:07:59.170764 4698 generic.go:334] "Generic (PLEG): container finished" podID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerID="6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871" exitCode=0 Oct 06 13:07:59 crc kubenswrapper[4698]: I1006 13:07:59.170857 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerDied","Data":"6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871"} Oct 06 13:07:59 crc kubenswrapper[4698]: I1006 13:07:59.170907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerStarted","Data":"a3f5d29d938623642b0d7d0e6208d05ab0fa649a141c807dfe5a4dee019560d0"} Oct 06 13:07:59 crc kubenswrapper[4698]: I1006 13:07:59.176248 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 13:08:01 crc kubenswrapper[4698]: I1006 13:08:01.198498 4698 generic.go:334] "Generic (PLEG): container finished" podID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerID="d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a" exitCode=0 Oct 06 13:08:01 crc kubenswrapper[4698]: I1006 13:08:01.199466 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerDied","Data":"d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a"} Oct 06 13:08:01 crc kubenswrapper[4698]: E1006 13:08:01.237568 4698 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd100aa_cb36_4aaa_a75d_024ba4926b2f.slice/crio-conmon-d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd100aa_cb36_4aaa_a75d_024ba4926b2f.slice/crio-d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a.scope\": RecentStats: unable to find data in memory cache]" Oct 06 13:08:02 crc kubenswrapper[4698]: I1006 13:08:02.218054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerStarted","Data":"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f"} Oct 06 13:08:03 crc kubenswrapper[4698]: I1006 13:08:03.264132 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn8xw" podStartSLOduration=3.553926748 podStartE2EDuration="6.264114133s" podCreationTimestamp="2025-10-06 13:07:57 +0000 UTC" firstStartedPulling="2025-10-06 13:07:59.175750073 +0000 UTC m=+4966.588442256" lastFinishedPulling="2025-10-06 13:08:01.885937458 +0000 UTC m=+4969.298629641" observedRunningTime="2025-10-06 13:08:03.256712691 +0000 UTC m=+4970.669404894" watchObservedRunningTime="2025-10-06 13:08:03.264114133 +0000 UTC m=+4970.676806306" Oct 06 13:08:07 crc kubenswrapper[4698]: I1006 13:08:07.330740 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:08:07 crc kubenswrapper[4698]: E1006 13:08:07.332194 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:08:08 crc kubenswrapper[4698]: I1006 13:08:08.255445 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:08 crc kubenswrapper[4698]: I1006 13:08:08.256220 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:08 crc kubenswrapper[4698]: I1006 13:08:08.331116 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:08 crc kubenswrapper[4698]: I1006 13:08:08.381347 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:08 crc kubenswrapper[4698]: I1006 13:08:08.583591 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:08:10 crc kubenswrapper[4698]: I1006 13:08:10.352654 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hn8xw" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="registry-server" containerID="cri-o://b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f" gracePeriod=2 Oct 06 13:08:10 crc kubenswrapper[4698]: I1006 13:08:10.923672 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.024288 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities\") pod \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.024389 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wkn\" (UniqueName: \"kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn\") pod \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.024629 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content\") pod \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\" (UID: \"edd100aa-cb36-4aaa-a75d-024ba4926b2f\") " Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.025596 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities" (OuterVolumeSpecName: "utilities") pod "edd100aa-cb36-4aaa-a75d-024ba4926b2f" (UID: "edd100aa-cb36-4aaa-a75d-024ba4926b2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.031045 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn" (OuterVolumeSpecName: "kube-api-access-r7wkn") pod "edd100aa-cb36-4aaa-a75d-024ba4926b2f" (UID: "edd100aa-cb36-4aaa-a75d-024ba4926b2f"). InnerVolumeSpecName "kube-api-access-r7wkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.127196 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.127240 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wkn\" (UniqueName: \"kubernetes.io/projected/edd100aa-cb36-4aaa-a75d-024ba4926b2f-kube-api-access-r7wkn\") on node \"crc\" DevicePath \"\"" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.304757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edd100aa-cb36-4aaa-a75d-024ba4926b2f" (UID: "edd100aa-cb36-4aaa-a75d-024ba4926b2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.330650 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd100aa-cb36-4aaa-a75d-024ba4926b2f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.364640 4698 generic.go:334] "Generic (PLEG): container finished" podID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerID="b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f" exitCode=0 Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.364690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerDied","Data":"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f"} Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.364729 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hn8xw" event={"ID":"edd100aa-cb36-4aaa-a75d-024ba4926b2f","Type":"ContainerDied","Data":"a3f5d29d938623642b0d7d0e6208d05ab0fa649a141c807dfe5a4dee019560d0"} Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.364755 4698 scope.go:117] "RemoveContainer" containerID="b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.364752 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn8xw" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.399641 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.408423 4698 scope.go:117] "RemoveContainer" containerID="d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.410846 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hn8xw"] Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.435124 4698 scope.go:117] "RemoveContainer" containerID="6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.478567 4698 scope.go:117] "RemoveContainer" containerID="b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f" Oct 06 13:08:11 crc kubenswrapper[4698]: E1006 13:08:11.479148 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f\": container with ID starting with b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f not found: ID does not exist" containerID="b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 
13:08:11.479212 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f"} err="failed to get container status \"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f\": rpc error: code = NotFound desc = could not find container \"b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f\": container with ID starting with b549ef7bd4467d45d0ae58ef8d5ea0c3d617c5d237d3db00d63c5ba8c17d153f not found: ID does not exist" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.479253 4698 scope.go:117] "RemoveContainer" containerID="d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a" Oct 06 13:08:11 crc kubenswrapper[4698]: E1006 13:08:11.479661 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a\": container with ID starting with d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a not found: ID does not exist" containerID="d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.479721 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a"} err="failed to get container status \"d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a\": rpc error: code = NotFound desc = could not find container \"d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a\": container with ID starting with d4e05a01b97805f9d9625780d2d3b52edc7e7594c9eac2e78768aa398e16ff2a not found: ID does not exist" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.479762 4698 scope.go:117] "RemoveContainer" containerID="6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871" Oct 06 13:08:11 crc 
kubenswrapper[4698]: E1006 13:08:11.480142 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871\": container with ID starting with 6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871 not found: ID does not exist" containerID="6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871" Oct 06 13:08:11 crc kubenswrapper[4698]: I1006 13:08:11.480190 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871"} err="failed to get container status \"6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871\": rpc error: code = NotFound desc = could not find container \"6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871\": container with ID starting with 6947a2df1f2bd6a678f0b7b975acc87bcd68ba48b5b199dcd18471a1f07bd871 not found: ID does not exist" Oct 06 13:08:11 crc kubenswrapper[4698]: E1006 13:08:11.518782 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd100aa_cb36_4aaa_a75d_024ba4926b2f.slice/crio-a3f5d29d938623642b0d7d0e6208d05ab0fa649a141c807dfe5a4dee019560d0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd100aa_cb36_4aaa_a75d_024ba4926b2f.slice\": RecentStats: unable to find data in memory cache]" Oct 06 13:08:13 crc kubenswrapper[4698]: I1006 13:08:13.342771 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" path="/var/lib/kubelet/pods/edd100aa-cb36-4aaa-a75d-024ba4926b2f/volumes" Oct 06 13:08:21 crc kubenswrapper[4698]: I1006 13:08:21.329597 4698 scope.go:117] "RemoveContainer" 
containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:08:21 crc kubenswrapper[4698]: E1006 13:08:21.331971 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:08:35 crc kubenswrapper[4698]: I1006 13:08:35.331305 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:08:35 crc kubenswrapper[4698]: E1006 13:08:35.332883 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:08:50 crc kubenswrapper[4698]: I1006 13:08:50.329781 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:08:50 crc kubenswrapper[4698]: E1006 13:08:50.330683 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:09:03 crc kubenswrapper[4698]: I1006 13:09:03.335460 4698 scope.go:117] 
"RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:09:03 crc kubenswrapper[4698]: E1006 13:09:03.336531 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:09:14 crc kubenswrapper[4698]: I1006 13:09:14.328855 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:09:14 crc kubenswrapper[4698]: E1006 13:09:14.329784 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:09:27 crc kubenswrapper[4698]: I1006 13:09:27.329202 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:09:27 crc kubenswrapper[4698]: E1006 13:09:27.330325 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:09:42 crc kubenswrapper[4698]: I1006 13:09:42.329472 
4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:09:42 crc kubenswrapper[4698]: E1006 13:09:42.330632 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:09:57 crc kubenswrapper[4698]: I1006 13:09:57.330171 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:09:57 crc kubenswrapper[4698]: E1006 13:09:57.331553 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:10:09 crc kubenswrapper[4698]: I1006 13:10:09.330522 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:10:09 crc kubenswrapper[4698]: E1006 13:10:09.331985 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:10:21 crc kubenswrapper[4698]: I1006 
13:10:21.329544 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:10:21 crc kubenswrapper[4698]: E1006 13:10:21.330732 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:10:32 crc kubenswrapper[4698]: I1006 13:10:32.329594 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:10:32 crc kubenswrapper[4698]: E1006 13:10:32.330574 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:10:46 crc kubenswrapper[4698]: I1006 13:10:46.329191 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:10:46 crc kubenswrapper[4698]: E1006 13:10:46.330266 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:11:00 crc 
kubenswrapper[4698]: I1006 13:11:00.328954 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:11:00 crc kubenswrapper[4698]: E1006 13:11:00.329808 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.597507 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:06 crc kubenswrapper[4698]: E1006 13:11:06.598217 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="extract-content" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.598230 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="extract-content" Oct 06 13:11:06 crc kubenswrapper[4698]: E1006 13:11:06.598264 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="extract-utilities" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.598270 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="extract-utilities" Oct 06 13:11:06 crc kubenswrapper[4698]: E1006 13:11:06.598287 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="registry-server" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.598293 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" 
containerName="registry-server" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.598504 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd100aa-cb36-4aaa-a75d-024ba4926b2f" containerName="registry-server" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.599961 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.646793 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.652647 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.652874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.653025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66pb9\" (UniqueName: \"kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.754880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.755233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.755289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66pb9\" (UniqueName: \"kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.755430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.755670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.780314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66pb9\" (UniqueName: 
\"kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9\") pod \"redhat-operators-h6wqf\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:06 crc kubenswrapper[4698]: I1006 13:11:06.961483 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:07 crc kubenswrapper[4698]: I1006 13:11:07.482812 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:08 crc kubenswrapper[4698]: I1006 13:11:08.322069 4698 generic.go:334] "Generic (PLEG): container finished" podID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerID="7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79" exitCode=0 Oct 06 13:11:08 crc kubenswrapper[4698]: I1006 13:11:08.322116 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerDied","Data":"7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79"} Oct 06 13:11:08 crc kubenswrapper[4698]: I1006 13:11:08.322382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerStarted","Data":"64da979c81b161f4cc7c948c3a5077ccd702ff7cc8b25ce19a4d58551196696e"} Oct 06 13:11:10 crc kubenswrapper[4698]: I1006 13:11:10.349575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerStarted","Data":"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36"} Oct 06 13:11:13 crc kubenswrapper[4698]: I1006 13:11:13.361770 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:11:13 crc 
kubenswrapper[4698]: E1006 13:11:13.378541 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:11:16 crc kubenswrapper[4698]: I1006 13:11:16.418547 4698 generic.go:334] "Generic (PLEG): container finished" podID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerID="22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36" exitCode=0 Oct 06 13:11:16 crc kubenswrapper[4698]: I1006 13:11:16.418641 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerDied","Data":"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36"} Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.296828 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.299830 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.321354 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.400109 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.400224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.400640 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.448722 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerStarted","Data":"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b"} Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.490132 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-h6wqf" podStartSLOduration=3.3827349030000002 podStartE2EDuration="12.490110955s" podCreationTimestamp="2025-10-06 13:11:06 +0000 UTC" firstStartedPulling="2025-10-06 13:11:08.323990908 +0000 UTC m=+5155.736683101" lastFinishedPulling="2025-10-06 13:11:17.43136697 +0000 UTC m=+5164.844059153" observedRunningTime="2025-10-06 13:11:18.485746067 +0000 UTC m=+5165.898438240" watchObservedRunningTime="2025-10-06 13:11:18.490110955 +0000 UTC m=+5165.902803128" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.502990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.503177 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.503204 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.503724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities\") pod \"certified-operators-8lwcn\" (UID: 
\"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.503817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.533307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792\") pod \"certified-operators-8lwcn\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:18 crc kubenswrapper[4698]: I1006 13:11:18.655192 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:19 crc kubenswrapper[4698]: I1006 13:11:19.161058 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:19 crc kubenswrapper[4698]: W1006 13:11:19.166031 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod023a966c_43ce_470c_819b_caf2608e9963.slice/crio-efcaedea0d6ca944fad1f231dd039d5988b1a8ef6249a9cf494723bb0dbb9dda WatchSource:0}: Error finding container efcaedea0d6ca944fad1f231dd039d5988b1a8ef6249a9cf494723bb0dbb9dda: Status 404 returned error can't find the container with id efcaedea0d6ca944fad1f231dd039d5988b1a8ef6249a9cf494723bb0dbb9dda Oct 06 13:11:19 crc kubenswrapper[4698]: I1006 13:11:19.462612 4698 generic.go:334] "Generic (PLEG): container finished" podID="023a966c-43ce-470c-819b-caf2608e9963" containerID="4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5" exitCode=0 Oct 06 13:11:19 crc kubenswrapper[4698]: I1006 13:11:19.462685 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerDied","Data":"4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5"} Oct 06 13:11:19 crc kubenswrapper[4698]: I1006 13:11:19.462726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerStarted","Data":"efcaedea0d6ca944fad1f231dd039d5988b1a8ef6249a9cf494723bb0dbb9dda"} Oct 06 13:11:20 crc kubenswrapper[4698]: I1006 13:11:20.473782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" 
event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerStarted","Data":"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a"} Oct 06 13:11:22 crc kubenswrapper[4698]: I1006 13:11:22.494838 4698 generic.go:334] "Generic (PLEG): container finished" podID="023a966c-43ce-470c-819b-caf2608e9963" containerID="5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a" exitCode=0 Oct 06 13:11:22 crc kubenswrapper[4698]: I1006 13:11:22.495003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerDied","Data":"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a"} Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.101106 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.104862 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.119576 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.300876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2dt\" (UniqueName: \"kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.301328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.301466 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.403265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.403364 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.403518 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2dt\" (UniqueName: \"kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.403809 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.403882 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.615168 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2dt\" (UniqueName: \"kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt\") pod \"redhat-marketplace-6jxp7\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:23 crc kubenswrapper[4698]: I1006 13:11:23.733463 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.234937 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.528279 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerStarted","Data":"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31"} Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.531472 4698 generic.go:334] "Generic (PLEG): container finished" podID="84e200c6-75ca-425a-be56-509cbf97c119" containerID="23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1" exitCode=0 Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.531516 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerDied","Data":"23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1"} Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.531543 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerStarted","Data":"ac06f4b3e521c16f6ada55fb559451da4485181b3378bee3ecdfdcebbef301f2"} Oct 06 13:11:24 crc kubenswrapper[4698]: I1006 13:11:24.565539 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lwcn" podStartSLOduration=2.9862665489999998 podStartE2EDuration="6.56551396s" podCreationTimestamp="2025-10-06 13:11:18 +0000 UTC" firstStartedPulling="2025-10-06 13:11:19.464365385 +0000 UTC m=+5166.877057558" lastFinishedPulling="2025-10-06 13:11:23.043612796 +0000 UTC m=+5170.456304969" observedRunningTime="2025-10-06 13:11:24.554299313 
+0000 UTC m=+5171.966991526" watchObservedRunningTime="2025-10-06 13:11:24.56551396 +0000 UTC m=+5171.978206153" Oct 06 13:11:26 crc kubenswrapper[4698]: I1006 13:11:26.567904 4698 generic.go:334] "Generic (PLEG): container finished" podID="84e200c6-75ca-425a-be56-509cbf97c119" containerID="94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e" exitCode=0 Oct 06 13:11:26 crc kubenswrapper[4698]: I1006 13:11:26.568229 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerDied","Data":"94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e"} Oct 06 13:11:26 crc kubenswrapper[4698]: I1006 13:11:26.961584 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:26 crc kubenswrapper[4698]: I1006 13:11:26.961896 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:27 crc kubenswrapper[4698]: I1006 13:11:27.329278 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:11:27 crc kubenswrapper[4698]: E1006 13:11:27.329646 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:11:27 crc kubenswrapper[4698]: I1006 13:11:27.580331 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" 
event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerStarted","Data":"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008"} Oct 06 13:11:27 crc kubenswrapper[4698]: I1006 13:11:27.609366 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6jxp7" podStartSLOduration=1.877832696 podStartE2EDuration="4.609344929s" podCreationTimestamp="2025-10-06 13:11:23 +0000 UTC" firstStartedPulling="2025-10-06 13:11:24.532776602 +0000 UTC m=+5171.945468775" lastFinishedPulling="2025-10-06 13:11:27.264288835 +0000 UTC m=+5174.676981008" observedRunningTime="2025-10-06 13:11:27.600752007 +0000 UTC m=+5175.013444190" watchObservedRunningTime="2025-10-06 13:11:27.609344929 +0000 UTC m=+5175.022037112" Oct 06 13:11:28 crc kubenswrapper[4698]: I1006 13:11:28.031322 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6wqf" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="registry-server" probeResult="failure" output=< Oct 06 13:11:28 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 13:11:28 crc kubenswrapper[4698]: > Oct 06 13:11:28 crc kubenswrapper[4698]: I1006 13:11:28.656125 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:28 crc kubenswrapper[4698]: I1006 13:11:28.656177 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:28 crc kubenswrapper[4698]: I1006 13:11:28.726261 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:29 crc kubenswrapper[4698]: I1006 13:11:29.658283 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:31 crc 
kubenswrapper[4698]: I1006 13:11:31.686721 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:31 crc kubenswrapper[4698]: I1006 13:11:31.687383 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lwcn" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="registry-server" containerID="cri-o://80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31" gracePeriod=2 Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.235074 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.398763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities\") pod \"023a966c-43ce-470c-819b-caf2608e9963\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.399027 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792\") pod \"023a966c-43ce-470c-819b-caf2608e9963\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.399085 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content\") pod \"023a966c-43ce-470c-819b-caf2608e9963\" (UID: \"023a966c-43ce-470c-819b-caf2608e9963\") " Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.400428 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities" (OuterVolumeSpecName: "utilities") pod "023a966c-43ce-470c-819b-caf2608e9963" (UID: "023a966c-43ce-470c-819b-caf2608e9963"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.405554 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792" (OuterVolumeSpecName: "kube-api-access-hf792") pod "023a966c-43ce-470c-819b-caf2608e9963" (UID: "023a966c-43ce-470c-819b-caf2608e9963"). InnerVolumeSpecName "kube-api-access-hf792". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.447784 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "023a966c-43ce-470c-819b-caf2608e9963" (UID: "023a966c-43ce-470c-819b-caf2608e9963"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.500888 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/023a966c-43ce-470c-819b-caf2608e9963-kube-api-access-hf792\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.500929 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.500941 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023a966c-43ce-470c-819b-caf2608e9963-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.632449 4698 generic.go:334] "Generic (PLEG): container finished" podID="023a966c-43ce-470c-819b-caf2608e9963" containerID="80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31" exitCode=0 Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.632498 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerDied","Data":"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31"} Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.632533 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lwcn" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.632556 4698 scope.go:117] "RemoveContainer" containerID="80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.632540 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lwcn" event={"ID":"023a966c-43ce-470c-819b-caf2608e9963","Type":"ContainerDied","Data":"efcaedea0d6ca944fad1f231dd039d5988b1a8ef6249a9cf494723bb0dbb9dda"} Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.656649 4698 scope.go:117] "RemoveContainer" containerID="5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.669551 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.686697 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lwcn"] Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.697487 4698 scope.go:117] "RemoveContainer" containerID="4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.747699 4698 scope.go:117] "RemoveContainer" containerID="80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31" Oct 06 13:11:32 crc kubenswrapper[4698]: E1006 13:11:32.748480 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31\": container with ID starting with 80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31 not found: ID does not exist" containerID="80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.748531 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31"} err="failed to get container status \"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31\": rpc error: code = NotFound desc = could not find container \"80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31\": container with ID starting with 80f2d7aba248261fca9305ef1e6909b7555297f4ce8839bb1481acb1dd87cd31 not found: ID does not exist" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.748557 4698 scope.go:117] "RemoveContainer" containerID="5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a" Oct 06 13:11:32 crc kubenswrapper[4698]: E1006 13:11:32.748914 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a\": container with ID starting with 5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a not found: ID does not exist" containerID="5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.748940 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a"} err="failed to get container status \"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a\": rpc error: code = NotFound desc = could not find container \"5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a\": container with ID starting with 5d8c8f4aff01236610dd70a805db26088dc25d659b64782af898372b3696a28a not found: ID does not exist" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.748952 4698 scope.go:117] "RemoveContainer" containerID="4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5" Oct 06 13:11:32 crc kubenswrapper[4698]: E1006 
13:11:32.749270 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5\": container with ID starting with 4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5 not found: ID does not exist" containerID="4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5" Oct 06 13:11:32 crc kubenswrapper[4698]: I1006 13:11:32.749298 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5"} err="failed to get container status \"4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5\": rpc error: code = NotFound desc = could not find container \"4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5\": container with ID starting with 4f9a692e4aa48a5cc77e19585a4b02ccdc35142903c71916855c284b96043be5 not found: ID does not exist" Oct 06 13:11:33 crc kubenswrapper[4698]: I1006 13:11:33.348783 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023a966c-43ce-470c-819b-caf2608e9963" path="/var/lib/kubelet/pods/023a966c-43ce-470c-819b-caf2608e9963/volumes" Oct 06 13:11:33 crc kubenswrapper[4698]: I1006 13:11:33.733723 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:33 crc kubenswrapper[4698]: I1006 13:11:33.733767 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:34 crc kubenswrapper[4698]: I1006 13:11:34.082531 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:34 crc kubenswrapper[4698]: I1006 13:11:34.744963 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:35 crc kubenswrapper[4698]: I1006 13:11:35.893801 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:36 crc kubenswrapper[4698]: I1006 13:11:36.680045 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6jxp7" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="registry-server" containerID="cri-o://f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008" gracePeriod=2 Oct 06 13:11:36 crc kubenswrapper[4698]: E1006 13:11:36.854166 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e200c6_75ca_425a_be56_509cbf97c119.slice/crio-f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008.scope\": RecentStats: unable to find data in memory cache]" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.041592 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.100414 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.241425 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.330137 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl2dt\" (UniqueName: \"kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt\") pod \"84e200c6-75ca-425a-be56-509cbf97c119\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.330213 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities\") pod \"84e200c6-75ca-425a-be56-509cbf97c119\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.330503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content\") pod \"84e200c6-75ca-425a-be56-509cbf97c119\" (UID: \"84e200c6-75ca-425a-be56-509cbf97c119\") " Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.332110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities" (OuterVolumeSpecName: "utilities") pod "84e200c6-75ca-425a-be56-509cbf97c119" (UID: "84e200c6-75ca-425a-be56-509cbf97c119"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.341396 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt" (OuterVolumeSpecName: "kube-api-access-xl2dt") pod "84e200c6-75ca-425a-be56-509cbf97c119" (UID: "84e200c6-75ca-425a-be56-509cbf97c119"). InnerVolumeSpecName "kube-api-access-xl2dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.350832 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84e200c6-75ca-425a-be56-509cbf97c119" (UID: "84e200c6-75ca-425a-be56-509cbf97c119"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.432589 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.432614 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl2dt\" (UniqueName: \"kubernetes.io/projected/84e200c6-75ca-425a-be56-509cbf97c119-kube-api-access-xl2dt\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.432626 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e200c6-75ca-425a-be56-509cbf97c119-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.701663 4698 generic.go:334] "Generic (PLEG): container finished" podID="84e200c6-75ca-425a-be56-509cbf97c119" containerID="f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008" exitCode=0 Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.701746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerDied","Data":"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008"} Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.701831 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-6jxp7" event={"ID":"84e200c6-75ca-425a-be56-509cbf97c119","Type":"ContainerDied","Data":"ac06f4b3e521c16f6ada55fb559451da4485181b3378bee3ecdfdcebbef301f2"} Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.701869 4698 scope.go:117] "RemoveContainer" containerID="f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.701764 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxp7" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.730083 4698 scope.go:117] "RemoveContainer" containerID="94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.740767 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.752972 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxp7"] Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.767769 4698 scope.go:117] "RemoveContainer" containerID="23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.816294 4698 scope.go:117] "RemoveContainer" containerID="f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008" Oct 06 13:11:37 crc kubenswrapper[4698]: E1006 13:11:37.816910 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008\": container with ID starting with f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008 not found: ID does not exist" containerID="f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.816963 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008"} err="failed to get container status \"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008\": rpc error: code = NotFound desc = could not find container \"f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008\": container with ID starting with f44fca61ffaa2dc356edc9e45f170706956c6c765fb24bd76e3a392fffd33008 not found: ID does not exist" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.816996 4698 scope.go:117] "RemoveContainer" containerID="94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e" Oct 06 13:11:37 crc kubenswrapper[4698]: E1006 13:11:37.817475 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e\": container with ID starting with 94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e not found: ID does not exist" containerID="94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.817504 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e"} err="failed to get container status \"94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e\": rpc error: code = NotFound desc = could not find container \"94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e\": container with ID starting with 94d1493b75680105910575cad1ae6b4b9842cf49106c6cc07dd3a5d589d3c00e not found: ID does not exist" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.817524 4698 scope.go:117] "RemoveContainer" containerID="23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1" Oct 06 13:11:37 crc kubenswrapper[4698]: E1006 
13:11:37.818235 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1\": container with ID starting with 23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1 not found: ID does not exist" containerID="23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1" Oct 06 13:11:37 crc kubenswrapper[4698]: I1006 13:11:37.818280 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1"} err="failed to get container status \"23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1\": rpc error: code = NotFound desc = could not find container \"23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1\": container with ID starting with 23653e4c0c880155e52070b6ab5600b8bac57c4e76491984acf26f3f3ca031f1 not found: ID does not exist" Oct 06 13:11:38 crc kubenswrapper[4698]: I1006 13:11:38.895996 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:38 crc kubenswrapper[4698]: I1006 13:11:38.896723 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6wqf" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="registry-server" containerID="cri-o://0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b" gracePeriod=2 Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.346593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e200c6-75ca-425a-be56-509cbf97c119" path="/var/lib/kubelet/pods/84e200c6-75ca-425a-be56-509cbf97c119/volumes" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.427681 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.581303 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities\") pod \"a8374697-93ba-4b33-b56b-c42bd2d4b400\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.581593 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content\") pod \"a8374697-93ba-4b33-b56b-c42bd2d4b400\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.581669 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66pb9\" (UniqueName: \"kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9\") pod \"a8374697-93ba-4b33-b56b-c42bd2d4b400\" (UID: \"a8374697-93ba-4b33-b56b-c42bd2d4b400\") " Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.582744 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities" (OuterVolumeSpecName: "utilities") pod "a8374697-93ba-4b33-b56b-c42bd2d4b400" (UID: "a8374697-93ba-4b33-b56b-c42bd2d4b400"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.588580 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9" (OuterVolumeSpecName: "kube-api-access-66pb9") pod "a8374697-93ba-4b33-b56b-c42bd2d4b400" (UID: "a8374697-93ba-4b33-b56b-c42bd2d4b400"). InnerVolumeSpecName "kube-api-access-66pb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.660631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8374697-93ba-4b33-b56b-c42bd2d4b400" (UID: "a8374697-93ba-4b33-b56b-c42bd2d4b400"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.684186 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66pb9\" (UniqueName: \"kubernetes.io/projected/a8374697-93ba-4b33-b56b-c42bd2d4b400-kube-api-access-66pb9\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.684227 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.684239 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8374697-93ba-4b33-b56b-c42bd2d4b400-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.728768 4698 generic.go:334] "Generic (PLEG): container finished" podID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerID="0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b" exitCode=0 Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.728856 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6wqf" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.728867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerDied","Data":"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b"} Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.729206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6wqf" event={"ID":"a8374697-93ba-4b33-b56b-c42bd2d4b400","Type":"ContainerDied","Data":"64da979c81b161f4cc7c948c3a5077ccd702ff7cc8b25ce19a4d58551196696e"} Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.729240 4698 scope.go:117] "RemoveContainer" containerID="0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.753263 4698 scope.go:117] "RemoveContainer" containerID="22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.792641 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.798293 4698 scope.go:117] "RemoveContainer" containerID="7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.807416 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6wqf"] Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.845709 4698 scope.go:117] "RemoveContainer" containerID="0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b" Oct 06 13:11:39 crc kubenswrapper[4698]: E1006 13:11:39.846227 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b\": container with ID starting with 0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b not found: ID does not exist" containerID="0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.846274 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b"} err="failed to get container status \"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b\": rpc error: code = NotFound desc = could not find container \"0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b\": container with ID starting with 0fb700969e42cf70060cc28229639e07126dc440f947c85c81483ae55d98856b not found: ID does not exist" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.846302 4698 scope.go:117] "RemoveContainer" containerID="22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36" Oct 06 13:11:39 crc kubenswrapper[4698]: E1006 13:11:39.846998 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36\": container with ID starting with 22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36 not found: ID does not exist" containerID="22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.847067 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36"} err="failed to get container status \"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36\": rpc error: code = NotFound desc = could not find container \"22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36\": container with ID 
starting with 22b63dbc36fc227a1b8360f9aefe881a8f9f2d5c2e6381fc2014d199e8468e36 not found: ID does not exist" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.847089 4698 scope.go:117] "RemoveContainer" containerID="7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79" Oct 06 13:11:39 crc kubenswrapper[4698]: E1006 13:11:39.847536 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79\": container with ID starting with 7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79 not found: ID does not exist" containerID="7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79" Oct 06 13:11:39 crc kubenswrapper[4698]: I1006 13:11:39.847590 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79"} err="failed to get container status \"7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79\": rpc error: code = NotFound desc = could not find container \"7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79\": container with ID starting with 7cb0d447ff1e74f31cbef2485c62d44a6bdda4a991f7cf98d26712d9b28acc79 not found: ID does not exist" Oct 06 13:11:41 crc kubenswrapper[4698]: I1006 13:11:41.330509 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:11:41 crc kubenswrapper[4698]: E1006 13:11:41.330904 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:11:41 crc kubenswrapper[4698]: I1006 13:11:41.348299 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" path="/var/lib/kubelet/pods/a8374697-93ba-4b33-b56b-c42bd2d4b400/volumes" Oct 06 13:11:56 crc kubenswrapper[4698]: I1006 13:11:56.329565 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:11:56 crc kubenswrapper[4698]: E1006 13:11:56.331102 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:12:10 crc kubenswrapper[4698]: I1006 13:12:10.329800 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:12:10 crc kubenswrapper[4698]: E1006 13:12:10.330754 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:12:21 crc kubenswrapper[4698]: I1006 13:12:21.330119 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:12:21 crc kubenswrapper[4698]: E1006 13:12:21.331211 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:12:34 crc kubenswrapper[4698]: I1006 13:12:34.330253 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:12:34 crc kubenswrapper[4698]: E1006 13:12:34.334893 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:12:47 crc kubenswrapper[4698]: I1006 13:12:47.328881 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:12:47 crc kubenswrapper[4698]: E1006 13:12:47.329727 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:12:58 crc kubenswrapper[4698]: I1006 13:12:58.329904 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:12:58 crc kubenswrapper[4698]: I1006 13:12:58.672236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470"} Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.164185 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk"] Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165391 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165423 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165431 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165443 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165453 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165499 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165509 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e200c6-75ca-425a-be56-509cbf97c119" 
containerName="extract-utilities" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165531 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165539 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165569 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165581 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165602 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165612 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165622 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165631 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: E1006 13:15:00.165653 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165666 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="023a966c-43ce-470c-819b-caf2608e9963" 
containerName="extract-content" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165938 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="023a966c-43ce-470c-819b-caf2608e9963" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.165979 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e200c6-75ca-425a-be56-509cbf97c119" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.166007 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8374697-93ba-4b33-b56b-c42bd2d4b400" containerName="registry-server" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.167763 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.171253 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.171415 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.184448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.184521 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume\") pod 
\"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.184543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjmc\" (UniqueName: \"kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.186267 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk"] Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.287545 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.287615 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjmc\" (UniqueName: \"kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.287781 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume\") pod \"collect-profiles-29329275-jfmvk\" (UID: 
\"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.289748 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.292880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.303855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjmc\" (UniqueName: \"kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc\") pod \"collect-profiles-29329275-jfmvk\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:00 crc kubenswrapper[4698]: I1006 13:15:00.499840 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:01 crc kubenswrapper[4698]: I1006 13:15:01.003442 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk"] Oct 06 13:15:01 crc kubenswrapper[4698]: I1006 13:15:01.048520 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" event={"ID":"3833bede-089e-4d6f-8dbc-a38e6af3f019","Type":"ContainerStarted","Data":"18f2409a7b0e7dde628f1d06f9864c1a94b113e8f5d9c0c48f8f1ffa937278ff"} Oct 06 13:15:02 crc kubenswrapper[4698]: I1006 13:15:02.070659 4698 generic.go:334] "Generic (PLEG): container finished" podID="3833bede-089e-4d6f-8dbc-a38e6af3f019" containerID="c2ae9d7f00a5193c276d924edc6d3803fbf0d87ccc50b309b73ab8d9fa067221" exitCode=0 Oct 06 13:15:02 crc kubenswrapper[4698]: I1006 13:15:02.070769 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" event={"ID":"3833bede-089e-4d6f-8dbc-a38e6af3f019","Type":"ContainerDied","Data":"c2ae9d7f00a5193c276d924edc6d3803fbf0d87ccc50b309b73ab8d9fa067221"} Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.491680 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.650479 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume\") pod \"3833bede-089e-4d6f-8dbc-a38e6af3f019\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.650793 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjmc\" (UniqueName: \"kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc\") pod \"3833bede-089e-4d6f-8dbc-a38e6af3f019\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.650861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume\") pod \"3833bede-089e-4d6f-8dbc-a38e6af3f019\" (UID: \"3833bede-089e-4d6f-8dbc-a38e6af3f019\") " Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.651707 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume" (OuterVolumeSpecName: "config-volume") pod "3833bede-089e-4d6f-8dbc-a38e6af3f019" (UID: "3833bede-089e-4d6f-8dbc-a38e6af3f019"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.656791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3833bede-089e-4d6f-8dbc-a38e6af3f019" (UID: "3833bede-089e-4d6f-8dbc-a38e6af3f019"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.657480 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc" (OuterVolumeSpecName: "kube-api-access-zdjmc") pod "3833bede-089e-4d6f-8dbc-a38e6af3f019" (UID: "3833bede-089e-4d6f-8dbc-a38e6af3f019"). InnerVolumeSpecName "kube-api-access-zdjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.753803 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3833bede-089e-4d6f-8dbc-a38e6af3f019-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.754169 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjmc\" (UniqueName: \"kubernetes.io/projected/3833bede-089e-4d6f-8dbc-a38e6af3f019-kube-api-access-zdjmc\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:03 crc kubenswrapper[4698]: I1006 13:15:03.754275 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3833bede-089e-4d6f-8dbc-a38e6af3f019-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:15:04 crc kubenswrapper[4698]: I1006 13:15:04.095842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" event={"ID":"3833bede-089e-4d6f-8dbc-a38e6af3f019","Type":"ContainerDied","Data":"18f2409a7b0e7dde628f1d06f9864c1a94b113e8f5d9c0c48f8f1ffa937278ff"} Oct 06 13:15:04 crc kubenswrapper[4698]: I1006 13:15:04.096232 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f2409a7b0e7dde628f1d06f9864c1a94b113e8f5d9c0c48f8f1ffa937278ff" Oct 06 13:15:04 crc kubenswrapper[4698]: I1006 13:15:04.095921 4698 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329275-jfmvk" Oct 06 13:15:04 crc kubenswrapper[4698]: I1006 13:15:04.571814 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs"] Oct 06 13:15:04 crc kubenswrapper[4698]: I1006 13:15:04.580346 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329230-z7hrs"] Oct 06 13:15:05 crc kubenswrapper[4698]: I1006 13:15:05.344390 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc" path="/var/lib/kubelet/pods/3fb3d153-09a4-4a3a-a8d8-92940ba3e1fc/volumes" Oct 06 13:15:25 crc kubenswrapper[4698]: I1006 13:15:25.235341 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:15:25 crc kubenswrapper[4698]: I1006 13:15:25.235917 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:15:45 crc kubenswrapper[4698]: I1006 13:15:45.079869 4698 scope.go:117] "RemoveContainer" containerID="3475ef7db1513aeb6939d24de2f074423c6bdfbfac12119656caab1bacae111c" Oct 06 13:15:55 crc kubenswrapper[4698]: I1006 13:15:55.235455 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Oct 06 13:15:55 crc kubenswrapper[4698]: I1006 13:15:55.235972 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:16:25 crc kubenswrapper[4698]: I1006 13:16:25.235110 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:16:25 crc kubenswrapper[4698]: I1006 13:16:25.236240 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:16:25 crc kubenswrapper[4698]: I1006 13:16:25.236301 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:16:25 crc kubenswrapper[4698]: I1006 13:16:25.237041 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:16:25 crc kubenswrapper[4698]: I1006 13:16:25.237100 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470" gracePeriod=600 Oct 06 13:16:26 crc kubenswrapper[4698]: I1006 13:16:26.060715 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470" exitCode=0 Oct 06 13:16:26 crc kubenswrapper[4698]: I1006 13:16:26.060864 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470"} Oct 06 13:16:26 crc kubenswrapper[4698]: I1006 13:16:26.061525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846"} Oct 06 13:16:26 crc kubenswrapper[4698]: I1006 13:16:26.061558 4698 scope.go:117] "RemoveContainer" containerID="3958acd2d5fb1a4fa1ef9e7b510a139e3beee0e46a4efb449d314d438ec67f24" Oct 06 13:18:25 crc kubenswrapper[4698]: I1006 13:18:25.235338 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:18:25 crc kubenswrapper[4698]: I1006 13:18:25.235937 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:18:55 crc kubenswrapper[4698]: I1006 13:18:55.235215 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:18:55 crc kubenswrapper[4698]: I1006 13:18:55.235922 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.761493 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:17 crc kubenswrapper[4698]: E1006 13:19:17.762481 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3833bede-089e-4d6f-8dbc-a38e6af3f019" containerName="collect-profiles" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.762496 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3833bede-089e-4d6f-8dbc-a38e6af3f019" containerName="collect-profiles" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.762767 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3833bede-089e-4d6f-8dbc-a38e6af3f019" containerName="collect-profiles" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.764581 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.788831 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.924150 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.924245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bn5z\" (UniqueName: \"kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:17 crc kubenswrapper[4698]: I1006 13:19:17.924317 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.026620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.027044 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2bn5z\" (UniqueName: \"kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.027085 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.027153 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.027455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.051118 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bn5z\" (UniqueName: \"kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z\") pod \"community-operators-kwskx\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.101084 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:18 crc kubenswrapper[4698]: I1006 13:19:18.725139 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:19 crc kubenswrapper[4698]: I1006 13:19:19.170033 4698 generic.go:334] "Generic (PLEG): container finished" podID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerID="08f57b7c88170a0d863146126ee27c774d8c9f3ed9a52d9004b09cf824fb1836" exitCode=0 Oct 06 13:19:19 crc kubenswrapper[4698]: I1006 13:19:19.170103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerDied","Data":"08f57b7c88170a0d863146126ee27c774d8c9f3ed9a52d9004b09cf824fb1836"} Oct 06 13:19:19 crc kubenswrapper[4698]: I1006 13:19:19.170377 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerStarted","Data":"c1da78e1ad097c0718b58036a74a6f6dd32515a78fa8724ef5c46ca48f07c847"} Oct 06 13:19:19 crc kubenswrapper[4698]: I1006 13:19:19.173047 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 13:19:20 crc kubenswrapper[4698]: I1006 13:19:20.181227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerStarted","Data":"0fb37890707b463cbb0041fde38f775b0daa4cb211770e94d6152044f4356697"} Oct 06 13:19:21 crc kubenswrapper[4698]: I1006 13:19:21.194256 4698 generic.go:334] "Generic (PLEG): container finished" podID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerID="0fb37890707b463cbb0041fde38f775b0daa4cb211770e94d6152044f4356697" exitCode=0 Oct 06 13:19:21 crc kubenswrapper[4698]: I1006 13:19:21.194478 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerDied","Data":"0fb37890707b463cbb0041fde38f775b0daa4cb211770e94d6152044f4356697"} Oct 06 13:19:23 crc kubenswrapper[4698]: I1006 13:19:23.220352 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerStarted","Data":"cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412"} Oct 06 13:19:23 crc kubenswrapper[4698]: I1006 13:19:23.250072 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kwskx" podStartSLOduration=3.501529543 podStartE2EDuration="6.250052353s" podCreationTimestamp="2025-10-06 13:19:17 +0000 UTC" firstStartedPulling="2025-10-06 13:19:19.172583322 +0000 UTC m=+5646.585275535" lastFinishedPulling="2025-10-06 13:19:21.921106172 +0000 UTC m=+5649.333798345" observedRunningTime="2025-10-06 13:19:23.243364969 +0000 UTC m=+5650.656057162" watchObservedRunningTime="2025-10-06 13:19:23.250052353 +0000 UTC m=+5650.662744516" Oct 06 13:19:25 crc kubenswrapper[4698]: I1006 13:19:25.235493 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:19:25 crc kubenswrapper[4698]: I1006 13:19:25.235569 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:19:25 crc kubenswrapper[4698]: I1006 13:19:25.235630 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:19:25 crc kubenswrapper[4698]: I1006 13:19:25.236641 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:19:25 crc kubenswrapper[4698]: I1006 13:19:25.236715 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" gracePeriod=600 Oct 06 13:19:25 crc kubenswrapper[4698]: E1006 13:19:25.435269 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:19:26 crc kubenswrapper[4698]: I1006 13:19:26.268461 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" exitCode=0 Oct 06 13:19:26 crc kubenswrapper[4698]: I1006 13:19:26.268510 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846"} Oct 06 13:19:26 crc kubenswrapper[4698]: I1006 13:19:26.268545 4698 scope.go:117] "RemoveContainer" containerID="642be85d4193b56be3069aff41f44b66663c0cbf2a4ee8a34d217b3a11025470" Oct 06 13:19:26 crc kubenswrapper[4698]: I1006 13:19:26.269241 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:19:26 crc kubenswrapper[4698]: E1006 13:19:26.269507 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:19:27 crc kubenswrapper[4698]: I1006 13:19:27.293353 4698 generic.go:334] "Generic (PLEG): container finished" podID="06bf5456-72f4-4eee-a851-c943572e317b" containerID="5f388c1fb15d05514598328d361e1400198852517a8659b06e98baa8d45ed414" exitCode=0 Oct 06 13:19:27 crc kubenswrapper[4698]: I1006 13:19:27.293462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"06bf5456-72f4-4eee-a851-c943572e317b","Type":"ContainerDied","Data":"5f388c1fb15d05514598328d361e1400198852517a8659b06e98baa8d45ed414"} Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.101950 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.102373 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 
13:19:28.187465 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.370868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.451131 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.728210 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.862500 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.862624 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvbrz\" (UniqueName: \"kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.862749 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.862840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.862925 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.863117 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.863148 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.863200 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.863234 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"06bf5456-72f4-4eee-a851-c943572e317b\" (UID: \"06bf5456-72f4-4eee-a851-c943572e317b\") " Oct 06 
13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.865362 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.868400 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data" (OuterVolumeSpecName: "config-data") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.869665 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.869959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz" (OuterVolumeSpecName: "kube-api-access-dvbrz") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "kube-api-access-dvbrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.920065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.932544 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.939970 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.944028 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.970945 4698 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.970985 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.971000 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.971029 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06bf5456-72f4-4eee-a851-c943572e317b-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.971064 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.971076 4698 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc kubenswrapper[4698]: I1006 13:19:28.971089 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvbrz\" (UniqueName: \"kubernetes.io/projected/06bf5456-72f4-4eee-a851-c943572e317b-kube-api-access-dvbrz\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:28 crc 
kubenswrapper[4698]: I1006 13:19:28.971100 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06bf5456-72f4-4eee-a851-c943572e317b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.000375 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.073295 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.240600 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "06bf5456-72f4-4eee-a851-c943572e317b" (UID: "06bf5456-72f4-4eee-a851-c943572e317b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.277925 4698 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/06bf5456-72f4-4eee-a851-c943572e317b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.319265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"06bf5456-72f4-4eee-a851-c943572e317b","Type":"ContainerDied","Data":"b27d1e49115de7e57d9b4eb73579c4023942773c7890675249eb1620cdc46007"} Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.319322 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b27d1e49115de7e57d9b4eb73579c4023942773c7890675249eb1620cdc46007" Oct 06 13:19:29 crc kubenswrapper[4698]: I1006 13:19:29.319358 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 06 13:19:30 crc kubenswrapper[4698]: I1006 13:19:30.331949 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kwskx" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="registry-server" containerID="cri-o://cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412" gracePeriod=2 Oct 06 13:19:30 crc kubenswrapper[4698]: E1006 13:19:30.568708 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd370ad0e_660b_4d9f_98b8_a9c01a4a7a6a.slice/crio-cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412.scope\": RecentStats: unable to find data in memory cache]" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.343342 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerID="cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412" exitCode=0 Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.343383 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerDied","Data":"cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412"} Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.343654 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwskx" event={"ID":"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a","Type":"ContainerDied","Data":"c1da78e1ad097c0718b58036a74a6f6dd32515a78fa8724ef5c46ca48f07c847"} Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.343671 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1da78e1ad097c0718b58036a74a6f6dd32515a78fa8724ef5c46ca48f07c847" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.394122 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.528855 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities\") pod \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.528925 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content\") pod \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.528988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bn5z\" (UniqueName: \"kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z\") pod \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\" (UID: \"d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a\") " Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.530135 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities" (OuterVolumeSpecName: "utilities") pod "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" (UID: "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.537618 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z" (OuterVolumeSpecName: "kube-api-access-2bn5z") pod "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" (UID: "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a"). InnerVolumeSpecName "kube-api-access-2bn5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.580364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" (UID: "d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.631169 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.631200 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:31 crc kubenswrapper[4698]: I1006 13:19:31.631214 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bn5z\" (UniqueName: \"kubernetes.io/projected/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a-kube-api-access-2bn5z\") on node \"crc\" DevicePath \"\"" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.356354 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwskx" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.417598 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.434660 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kwskx"] Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.983007 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 06 13:19:32 crc kubenswrapper[4698]: E1006 13:19:32.984299 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bf5456-72f4-4eee-a851-c943572e317b" containerName="tempest-tests-tempest-tests-runner" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984342 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bf5456-72f4-4eee-a851-c943572e317b" containerName="tempest-tests-tempest-tests-runner" Oct 06 13:19:32 crc kubenswrapper[4698]: E1006 13:19:32.984388 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="extract-utilities" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984402 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="extract-utilities" Oct 06 13:19:32 crc kubenswrapper[4698]: E1006 13:19:32.984420 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="extract-content" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984432 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="extract-content" Oct 06 13:19:32 crc kubenswrapper[4698]: E1006 13:19:32.984463 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="registry-server" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984507 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="registry-server" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984913 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="06bf5456-72f4-4eee-a851-c943572e317b" containerName="tempest-tests-tempest-tests-runner" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.984956 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" containerName="registry-server" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.986241 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.989961 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pnjbc" Oct 06 13:19:32 crc kubenswrapper[4698]: I1006 13:19:32.997455 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.067258 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wbpc\" (UniqueName: \"kubernetes.io/projected/e9b78c09-1a17-43bd-8c65-27d435435cf8-kube-api-access-4wbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.067388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.169633 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wbpc\" (UniqueName: \"kubernetes.io/projected/e9b78c09-1a17-43bd-8c65-27d435435cf8-kube-api-access-4wbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.169759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.170405 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.204575 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wbpc\" (UniqueName: \"kubernetes.io/projected/e9b78c09-1a17-43bd-8c65-27d435435cf8-kube-api-access-4wbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.215133 
4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e9b78c09-1a17-43bd-8c65-27d435435cf8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.325660 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.348169 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a" path="/var/lib/kubelet/pods/d370ad0e-660b-4d9f-98b8-a9c01a4a7a6a/volumes" Oct 06 13:19:33 crc kubenswrapper[4698]: I1006 13:19:33.806372 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 06 13:19:33 crc kubenswrapper[4698]: W1006 13:19:33.817120 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9b78c09_1a17_43bd_8c65_27d435435cf8.slice/crio-c0248b7fa848257972b70659d9516380044a924dfee1163d5ec2fd91a39b63f2 WatchSource:0}: Error finding container c0248b7fa848257972b70659d9516380044a924dfee1163d5ec2fd91a39b63f2: Status 404 returned error can't find the container with id c0248b7fa848257972b70659d9516380044a924dfee1163d5ec2fd91a39b63f2 Oct 06 13:19:34 crc kubenswrapper[4698]: I1006 13:19:34.388354 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e9b78c09-1a17-43bd-8c65-27d435435cf8","Type":"ContainerStarted","Data":"c0248b7fa848257972b70659d9516380044a924dfee1163d5ec2fd91a39b63f2"} Oct 06 13:19:36 crc kubenswrapper[4698]: I1006 13:19:36.416531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e9b78c09-1a17-43bd-8c65-27d435435cf8","Type":"ContainerStarted","Data":"83dc3bb958abd91869d73aee50500dc06fa75cc51cc6c3d91d5eb91cb03ad1d5"} Oct 06 13:19:36 crc kubenswrapper[4698]: I1006 13:19:36.449542 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.064378692 podStartE2EDuration="4.44950903s" podCreationTimestamp="2025-10-06 13:19:32 +0000 UTC" firstStartedPulling="2025-10-06 13:19:33.820007473 +0000 UTC m=+5661.232699696" lastFinishedPulling="2025-10-06 13:19:35.205137821 +0000 UTC m=+5662.617830034" observedRunningTime="2025-10-06 13:19:36.435002243 +0000 UTC m=+5663.847694446" watchObservedRunningTime="2025-10-06 13:19:36.44950903 +0000 UTC m=+5663.862201243" Oct 06 13:19:40 crc kubenswrapper[4698]: I1006 13:19:40.329660 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:19:40 crc kubenswrapper[4698]: E1006 13:19:40.330812 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.189905 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-htmrk/must-gather-c54sp"] Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.192363 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.194319 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-htmrk"/"openshift-service-ca.crt" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.194414 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-htmrk"/"default-dockercfg-jfw4x" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.194846 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-htmrk"/"kube-root-ca.crt" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.200461 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-htmrk/must-gather-c54sp"] Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.211039 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.211091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llkj\" (UniqueName: \"kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.312851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " 
pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.312918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llkj\" (UniqueName: \"kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.313431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.329864 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llkj\" (UniqueName: \"kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj\") pod \"must-gather-c54sp\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.335572 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:19:53 crc kubenswrapper[4698]: E1006 13:19:53.335838 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.509705 4698 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:19:53 crc kubenswrapper[4698]: I1006 13:19:53.868648 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-htmrk/must-gather-c54sp"] Oct 06 13:19:53 crc kubenswrapper[4698]: W1006 13:19:53.875911 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83fa784_e87e_4c0c_a670_d2001afca26e.slice/crio-fda2ce6d1b5adc8aa48b4db8baeb7ea0a42a70c34899b0b5ce77cf3c487a4a1e WatchSource:0}: Error finding container fda2ce6d1b5adc8aa48b4db8baeb7ea0a42a70c34899b0b5ce77cf3c487a4a1e: Status 404 returned error can't find the container with id fda2ce6d1b5adc8aa48b4db8baeb7ea0a42a70c34899b0b5ce77cf3c487a4a1e Oct 06 13:19:54 crc kubenswrapper[4698]: I1006 13:19:54.642626 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/must-gather-c54sp" event={"ID":"d83fa784-e87e-4c0c-a670-d2001afca26e","Type":"ContainerStarted","Data":"fda2ce6d1b5adc8aa48b4db8baeb7ea0a42a70c34899b0b5ce77cf3c487a4a1e"} Oct 06 13:19:59 crc kubenswrapper[4698]: I1006 13:19:59.702465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/must-gather-c54sp" event={"ID":"d83fa784-e87e-4c0c-a670-d2001afca26e","Type":"ContainerStarted","Data":"4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf"} Oct 06 13:19:59 crc kubenswrapper[4698]: I1006 13:19:59.703388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/must-gather-c54sp" event={"ID":"d83fa784-e87e-4c0c-a670-d2001afca26e","Type":"ContainerStarted","Data":"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd"} Oct 06 13:19:59 crc kubenswrapper[4698]: I1006 13:19:59.741717 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-htmrk/must-gather-c54sp" podStartSLOduration=2.109814695 
podStartE2EDuration="6.741696118s" podCreationTimestamp="2025-10-06 13:19:53 +0000 UTC" firstStartedPulling="2025-10-06 13:19:53.878563097 +0000 UTC m=+5681.291255270" lastFinishedPulling="2025-10-06 13:19:58.51044452 +0000 UTC m=+5685.923136693" observedRunningTime="2025-10-06 13:19:59.723363526 +0000 UTC m=+5687.136055709" watchObservedRunningTime="2025-10-06 13:19:59.741696118 +0000 UTC m=+5687.154388301" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.525716 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-htmrk/crc-debug-dljv2"] Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.527673 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.649050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfrg\" (UniqueName: \"kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.649124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.751039 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfrg\" (UniqueName: \"kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: 
I1006 13:20:03.751439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.751522 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.771124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfrg\" (UniqueName: \"kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg\") pod \"crc-debug-dljv2\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: I1006 13:20:03.864040 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:20:03 crc kubenswrapper[4698]: W1006 13:20:03.908643 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f73a38_6cb0_4c3e_889d_fa24328b8f74.slice/crio-78cee86cfc11b14fb6a98ca1aa958ab2b8921543e3d8cea9be4c7181cbc70859 WatchSource:0}: Error finding container 78cee86cfc11b14fb6a98ca1aa958ab2b8921543e3d8cea9be4c7181cbc70859: Status 404 returned error can't find the container with id 78cee86cfc11b14fb6a98ca1aa958ab2b8921543e3d8cea9be4c7181cbc70859 Oct 06 13:20:04 crc kubenswrapper[4698]: I1006 13:20:04.329577 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:20:04 crc kubenswrapper[4698]: E1006 13:20:04.330215 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:20:04 crc kubenswrapper[4698]: I1006 13:20:04.752966 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-dljv2" event={"ID":"d0f73a38-6cb0-4c3e-889d-fa24328b8f74","Type":"ContainerStarted","Data":"78cee86cfc11b14fb6a98ca1aa958ab2b8921543e3d8cea9be4c7181cbc70859"} Oct 06 13:20:14 crc kubenswrapper[4698]: I1006 13:20:14.850123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-dljv2" event={"ID":"d0f73a38-6cb0-4c3e-889d-fa24328b8f74","Type":"ContainerStarted","Data":"11f09922956c3a20d8fd17ef38f97200ec74e9a00d3ecf365b5340971fbe6723"} Oct 06 13:20:14 crc kubenswrapper[4698]: I1006 13:20:14.870006 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-htmrk/crc-debug-dljv2" podStartSLOduration=1.992259606 podStartE2EDuration="11.86998552s" podCreationTimestamp="2025-10-06 13:20:03 +0000 UTC" firstStartedPulling="2025-10-06 13:20:03.910672423 +0000 UTC m=+5691.323364596" lastFinishedPulling="2025-10-06 13:20:13.788398337 +0000 UTC m=+5701.201090510" observedRunningTime="2025-10-06 13:20:14.868201007 +0000 UTC m=+5702.280893180" watchObservedRunningTime="2025-10-06 13:20:14.86998552 +0000 UTC m=+5702.282677703" Oct 06 13:20:18 crc kubenswrapper[4698]: I1006 13:20:18.329374 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:20:18 crc kubenswrapper[4698]: E1006 13:20:18.330893 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:20:30 crc kubenswrapper[4698]: I1006 13:20:30.329719 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:20:30 crc kubenswrapper[4698]: E1006 13:20:30.330724 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:20:45 crc kubenswrapper[4698]: I1006 13:20:45.329306 4698 scope.go:117] 
"RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:20:45 crc kubenswrapper[4698]: E1006 13:20:45.330117 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:00 crc kubenswrapper[4698]: I1006 13:21:00.329184 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:21:00 crc kubenswrapper[4698]: E1006 13:21:00.329882 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.329572 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:21:11 crc kubenswrapper[4698]: E1006 13:21:11.331713 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.379290 
4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.381530 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.389229 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.432856 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.433037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.433158 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75gx\" (UniqueName: \"kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.538081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities\") pod \"redhat-operators-htslh\" (UID: 
\"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.538198 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75gx\" (UniqueName: \"kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.538289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.538708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.538891 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.561751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75gx\" (UniqueName: \"kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx\") pod \"redhat-operators-htslh\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " 
pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:11 crc kubenswrapper[4698]: I1006 13:21:11.716619 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:12 crc kubenswrapper[4698]: I1006 13:21:12.215472 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:12 crc kubenswrapper[4698]: I1006 13:21:12.399705 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerStarted","Data":"fb499ad56c43a4f67d001ba59e24b9f7be606ae25f2f2133f33ee3efcba0eead"} Oct 06 13:21:13 crc kubenswrapper[4698]: I1006 13:21:13.408362 4698 generic.go:334] "Generic (PLEG): container finished" podID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerID="a66a0082bd39abe4b59f7aa5d7e7353691ab503ce594e82f082c4e14cb12fb53" exitCode=0 Oct 06 13:21:13 crc kubenswrapper[4698]: I1006 13:21:13.408924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerDied","Data":"a66a0082bd39abe4b59f7aa5d7e7353691ab503ce594e82f082c4e14cb12fb53"} Oct 06 13:21:14 crc kubenswrapper[4698]: I1006 13:21:14.420686 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerStarted","Data":"1735f58a6c1b2cda1237224429a96fecc36f3176be51f1bf57c15b71f810d696"} Oct 06 13:21:18 crc kubenswrapper[4698]: I1006 13:21:18.459915 4698 generic.go:334] "Generic (PLEG): container finished" podID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerID="1735f58a6c1b2cda1237224429a96fecc36f3176be51f1bf57c15b71f810d696" exitCode=0 Oct 06 13:21:18 crc kubenswrapper[4698]: I1006 13:21:18.459997 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerDied","Data":"1735f58a6c1b2cda1237224429a96fecc36f3176be51f1bf57c15b71f810d696"} Oct 06 13:21:19 crc kubenswrapper[4698]: I1006 13:21:19.471796 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerStarted","Data":"c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0"} Oct 06 13:21:19 crc kubenswrapper[4698]: I1006 13:21:19.494218 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-htslh" podStartSLOduration=3.007339873 podStartE2EDuration="8.49419463s" podCreationTimestamp="2025-10-06 13:21:11 +0000 UTC" firstStartedPulling="2025-10-06 13:21:13.411076152 +0000 UTC m=+5760.823768325" lastFinishedPulling="2025-10-06 13:21:18.897930909 +0000 UTC m=+5766.310623082" observedRunningTime="2025-10-06 13:21:19.488995983 +0000 UTC m=+5766.901688166" watchObservedRunningTime="2025-10-06 13:21:19.49419463 +0000 UTC m=+5766.906886803" Oct 06 13:21:21 crc kubenswrapper[4698]: I1006 13:21:21.718361 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:21 crc kubenswrapper[4698]: I1006 13:21:21.718779 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:22 crc kubenswrapper[4698]: I1006 13:21:22.329682 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:21:22 crc kubenswrapper[4698]: E1006 13:21:22.330300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:22 crc kubenswrapper[4698]: I1006 13:21:22.767149 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htslh" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="registry-server" probeResult="failure" output=< Oct 06 13:21:22 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 06 13:21:22 crc kubenswrapper[4698]: > Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.358809 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6857b4f974-dqhrx_61b610ef-3459-4cf9-9328-d1f95d01be7a/barbican-api/0.log" Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.409056 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6857b4f974-dqhrx_61b610ef-3459-4cf9-9328-d1f95d01be7a/barbican-api-log/0.log" Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.590371 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-746dff6454-x5fd6_fc5e62c6-2df3-4629-831b-a2342fef2343/barbican-keystone-listener/0.log" Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.647376 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-746dff6454-x5fd6_fc5e62c6-2df3-4629-831b-a2342fef2343/barbican-keystone-listener-log/0.log" Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.811529 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58d8667c5c-dbf84_fa92b339-0782-432a-a352-5a0718033683/barbican-worker/0.log" Oct 06 13:21:24 crc kubenswrapper[4698]: I1006 13:21:24.840799 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-58d8667c5c-dbf84_fa92b339-0782-432a-a352-5a0718033683/barbican-worker-log/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.062768 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-885pc_7a9dbb12-cd2b-4f3a-a602-35ae29132726/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.243665 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/ceilometer-central-agent/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.271207 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/ceilometer-notification-agent/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.342940 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/proxy-httpd/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.434803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/sg-core/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.642902 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b5496d3c-491b-4f5d-8351-2e7eac348fd2/cinder-api-log/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.656944 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b5496d3c-491b-4f5d-8351-2e7eac348fd2/cinder-api/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.874393 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd84c444-81fa-4206-8517-a25ba61c7209/probe/0.log" Oct 06 13:21:25 crc kubenswrapper[4698]: I1006 13:21:25.899131 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_cd84c444-81fa-4206-8517-a25ba61c7209/cinder-scheduler/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.105266 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ltjln_f084d261-7f67-4be1-83b2-7e1c379e0ffe/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.200953 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5jclg_166970c1-3e73-47ca-b4c7-ea9c980ce7bb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.449027 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7_be279c30-e0a4-4828-8e13-2375265bb01f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.651178 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/init/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.781496 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/init/0.log" Oct 06 13:21:26 crc kubenswrapper[4698]: I1006 13:21:26.979084 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/dnsmasq-dns/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.059280 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx_cb95c9b2-ec91-415c-851c-1d10cd61f0f4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.203623 4698 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8f518e2b-0a37-49eb-83f3-a393139e84c9/glance-httpd/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.218306 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8f518e2b-0a37-49eb-83f3-a393139e84c9/glance-log/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.403587 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0afe62d1-9751-4c32-820b-770b71e5599f/glance-httpd/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.435094 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0afe62d1-9751-4c32-820b-770b71e5599f/glance-log/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.677075 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-849d766464-jl8th_2b4da0ff-f7c0-47d2-b204-69c0da4ab453/horizon/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.737173 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dl55v_8624f3b8-45df-4efd-b49f-33838276c948/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:27 crc kubenswrapper[4698]: I1006 13:21:27.926660 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8lz9t_fdcfa9c6-8380-471e-a9bb-1368772713a5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:28 crc kubenswrapper[4698]: I1006 13:21:28.230745 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-849d766464-jl8th_2b4da0ff-f7c0-47d2-b204-69c0da4ab453/horizon-log/0.log" Oct 06 13:21:28 crc kubenswrapper[4698]: I1006 13:21:28.254314 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29329261-9wv7t_4a476195-2a9a-4be4-8199-16903da18935/keystone-cron/0.log" Oct 06 13:21:28 crc kubenswrapper[4698]: I1006 13:21:28.416924 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c2b3ac80-8153-430c-893a-21c4cc2f2a5d/kube-state-metrics/0.log" Oct 06 13:21:28 crc kubenswrapper[4698]: I1006 13:21:28.582987 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-658b97bb55-lp7jm_59515f7e-0c54-4044-8b9a-45f3aebb9870/keystone-api/0.log" Oct 06 13:21:28 crc kubenswrapper[4698]: I1006 13:21:28.626383 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9_7a102252-962d-4cb3-970b-acd2557e633e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:29 crc kubenswrapper[4698]: I1006 13:21:29.187640 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c87999589-tj5hk_6d4d2004-223b-4b0e-9b88-229437567c01/neutron-api/0.log" Oct 06 13:21:29 crc kubenswrapper[4698]: I1006 13:21:29.268276 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h_c824e0ef-121e-428f-bf96-f9e1c87e57c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:29 crc kubenswrapper[4698]: I1006 13:21:29.285892 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c87999589-tj5hk_6d4d2004-223b-4b0e-9b88-229437567c01/neutron-httpd/0.log" Oct 06 13:21:30 crc kubenswrapper[4698]: I1006 13:21:30.059854 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_300ccf8e-2aa0-41c6-be99-b55c56ac8c73/nova-cell0-conductor-conductor/0.log" Oct 06 13:21:30 crc kubenswrapper[4698]: I1006 13:21:30.562149 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_fdb56a27-9290-42b9-9936-6de34abca79c/nova-cell1-conductor-conductor/0.log" Oct 06 13:21:30 crc kubenswrapper[4698]: I1006 13:21:30.650521 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_68fa4814-8052-4643-996f-ec7f189756e2/nova-api-log/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.020306 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f0828613-cf15-40b5-9af1-c13b856373bd/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.084139 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_68fa4814-8052-4643-996f-ec7f189756e2/nova-api-api/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.252361 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-786mf_9853ba7c-85b2-4a97-ac8c-80be3f979248/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.427000 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7c3a05a9-7f25-4408-91b3-0ffa68c55545/nova-metadata-log/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.766221 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.822541 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.934904 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d1cd5e9b-2297-4e73-91d5-a1cd00ff8263/nova-scheduler-scheduler/0.log" Oct 06 13:21:31 crc kubenswrapper[4698]: I1006 13:21:31.980125 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/mysql-bootstrap/0.log" Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.024994 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.167885 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/mysql-bootstrap/0.log" Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.242920 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/galera/0.log" Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.692096 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/mysql-bootstrap/0.log" Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.821372 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/mysql-bootstrap/0.log" Oct 06 13:21:32 crc kubenswrapper[4698]: I1006 13:21:32.898364 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/galera/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.125898 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_832ec6ae-a05c-4838-93d2-8957d3dcdc6a/openstackclient/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.329620 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4ktjh_f7d17b7b-03e7-4379-9c64-57d50be1882c/openstack-network-exporter/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.329925 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 
13:21:33 crc kubenswrapper[4698]: E1006 13:21:33.330304 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.549111 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server-init/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.633445 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-htslh" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="registry-server" containerID="cri-o://c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0" gracePeriod=2 Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.712089 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server-init/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.779241 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovs-vswitchd/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.914091 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server/0.log" Oct 06 13:21:33 crc kubenswrapper[4698]: I1006 13:21:33.972932 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7c3a05a9-7f25-4408-91b3-0ffa68c55545/nova-metadata-metadata/0.log" Oct 06 13:21:34 crc 
kubenswrapper[4698]: E1006 13:21:34.027692 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b90be7f_dccd_4400_b20e_2fde8b0209b9.slice/crio-conmon-c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0.scope\": RecentStats: unable to find data in memory cache]" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.129453 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qmjmg_7dd3b0e2-4d06-4c91-8539-4db08c7f2d23/ovn-controller/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.227110 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gklpg_4112723d-ae85-4f84-867e-9219f74672ff/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.391420 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_711c60fb-212e-45d1-87c3-c15a97c60f90/openstack-network-exporter/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.439110 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_711c60fb-212e-45d1-87c3-c15a97c60f90/ovn-northd/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.643860 4698 generic.go:334] "Generic (PLEG): container finished" podID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerID="c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0" exitCode=0 Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.643910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerDied","Data":"c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0"} Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.643942 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-htslh" event={"ID":"5b90be7f-dccd-4400-b20e-2fde8b0209b9","Type":"ContainerDied","Data":"fb499ad56c43a4f67d001ba59e24b9f7be606ae25f2f2133f33ee3efcba0eead"} Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.643954 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb499ad56c43a4f67d001ba59e24b9f7be606ae25f2f2133f33ee3efcba0eead" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.651737 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.678474 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_513c9b58-394d-48dd-a0c9-7ea2f4643f25/openstack-network-exporter/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.682129 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_513c9b58-394d-48dd-a0c9-7ea2f4643f25/ovsdbserver-nb/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.827778 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n75gx\" (UniqueName: \"kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx\") pod \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.827912 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content\") pod \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.828077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities\") pod \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\" (UID: \"5b90be7f-dccd-4400-b20e-2fde8b0209b9\") " Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.829546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities" (OuterVolumeSpecName: "utilities") pod "5b90be7f-dccd-4400-b20e-2fde8b0209b9" (UID: "5b90be7f-dccd-4400-b20e-2fde8b0209b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.842158 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx" (OuterVolumeSpecName: "kube-api-access-n75gx") pod "5b90be7f-dccd-4400-b20e-2fde8b0209b9" (UID: "5b90be7f-dccd-4400-b20e-2fde8b0209b9"). InnerVolumeSpecName "kube-api-access-n75gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.869367 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3024f021-f705-443b-a7e1-bcb574c25fe7/ovsdbserver-sb/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.905509 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b90be7f-dccd-4400-b20e-2fde8b0209b9" (UID: "5b90be7f-dccd-4400-b20e-2fde8b0209b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.911752 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3024f021-f705-443b-a7e1-bcb574c25fe7/openstack-network-exporter/0.log" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.932396 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.932440 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n75gx\" (UniqueName: \"kubernetes.io/projected/5b90be7f-dccd-4400-b20e-2fde8b0209b9-kube-api-access-n75gx\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:34 crc kubenswrapper[4698]: I1006 13:21:34.932450 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b90be7f-dccd-4400-b20e-2fde8b0209b9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.320649 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667d6544d-8ddpx_306a4319-6233-4455-85ac-b0c422603faf/placement-api/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.468207 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667d6544d-8ddpx_306a4319-6233-4455-85ac-b0c422603faf/placement-log/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.529033 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/init-config-reloader/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.651396 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htslh" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.677399 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.691990 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-htslh"] Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.713627 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/config-reloader/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.747155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/prometheus/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.767475 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/init-config-reloader/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.916037 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/thanos-sidecar/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.926666 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_fa023504-b5d3-415a-a98c-8771aac74c06/memcached/0.log" Oct 06 13:21:35 crc kubenswrapper[4698]: I1006 13:21:35.984659 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/setup-container/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.147518 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/setup-container/0.log" Oct 06 13:21:36 crc 
kubenswrapper[4698]: I1006 13:21:36.155836 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/rabbitmq/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.214544 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/setup-container/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.378498 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/rabbitmq/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.378601 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/setup-container/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.452499 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x_702cd121-45e6-44b8-bdc6-c97634e3307f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.590803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-975wl_1aa6350f-22ad-49c6-b717-6b5db37d7b27/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.645533 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf_4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 13:21:36.795344 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gr96r_eab59609-328f-41d0-94e9-0f6bcd78eaa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:36 crc kubenswrapper[4698]: I1006 
13:21:36.833188 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-z9v58_2e1a78cf-8260-4c6c-88ed-fa72b63e10a9/ssh-known-hosts-edpm-deployment/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.058498 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd5b9f8ff-k9cfq_6900b347-8ed3-4474-b6b1-623471b2a03f/proxy-server/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.140080 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd5b9f8ff-k9cfq_6900b347-8ed3-4474-b6b1-623471b2a03f/proxy-httpd/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.188921 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lhjtp_44a2d222-9a03-4483-a9dd-2708e7b3a5c7/swift-ring-rebalance/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.261970 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-auditor/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.305865 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-reaper/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.338757 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" path="/var/lib/kubelet/pods/5b90be7f-dccd-4400-b20e-2fde8b0209b9/volumes" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.432357 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-replicator/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.472803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-auditor/0.log" Oct 06 13:21:37 
crc kubenswrapper[4698]: I1006 13:21:37.493616 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-server/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.517163 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-replicator/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.623524 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-server/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.654559 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-updater/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.707988 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-expirer/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.712166 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-auditor/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.830277 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-replicator/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.868518 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-server/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.906792 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/rsync/0.log" Oct 06 13:21:37 crc kubenswrapper[4698]: I1006 13:21:37.908491 4698 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-updater/0.log" Oct 06 13:21:38 crc kubenswrapper[4698]: I1006 13:21:38.012158 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/swift-recon-cron/0.log" Oct 06 13:21:38 crc kubenswrapper[4698]: I1006 13:21:38.146485 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-d5d62_ff7ed42f-2288-48ac-9f89-9305e2f4a151/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:38 crc kubenswrapper[4698]: I1006 13:21:38.229510 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_06bf5456-72f4-4eee-a851-c943572e317b/tempest-tests-tempest-tests-runner/0.log" Oct 06 13:21:38 crc kubenswrapper[4698]: I1006 13:21:38.340818 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e9b78c09-1a17-43bd-8c65-27d435435cf8/test-operator-logs-container/0.log" Oct 06 13:21:38 crc kubenswrapper[4698]: I1006 13:21:38.428190 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q67vc_ecc55e3d-ca7e-41de-9f19-fb1b2857d398/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:21:39 crc kubenswrapper[4698]: I1006 13:21:39.155090 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_eac2f2ee-e5e6-4fb9-a527-47976859efe7/watcher-applier/0.log" Oct 06 13:21:39 crc kubenswrapper[4698]: I1006 13:21:39.564060 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_4c9514ee-65e0-4349-af35-8b7a65cf6bb9/watcher-api-log/0.log" Oct 06 13:21:40 crc kubenswrapper[4698]: I1006 13:21:40.154400 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_d8e278fa-3dfa-47dd-82b0-7296cc9ef08d/watcher-decision-engine/0.log" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.583789 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:41 crc kubenswrapper[4698]: E1006 13:21:41.584476 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="extract-content" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.584488 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="extract-content" Oct 06 13:21:41 crc kubenswrapper[4698]: E1006 13:21:41.584503 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="registry-server" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.584510 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="registry-server" Oct 06 13:21:41 crc kubenswrapper[4698]: E1006 13:21:41.584541 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="extract-utilities" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.584548 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="extract-utilities" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.584754 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b90be7f-dccd-4400-b20e-2fde8b0209b9" containerName="registry-server" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.586221 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.620915 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.746139 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.746261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp95n\" (UniqueName: \"kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.746285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.848154 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.848308 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qp95n\" (UniqueName: \"kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.848346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.848907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.849213 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.869332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp95n\" (UniqueName: \"kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n\") pod \"certified-operators-dzn8r\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:41 crc kubenswrapper[4698]: I1006 13:21:41.919776 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:42 crc kubenswrapper[4698]: I1006 13:21:42.052854 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_4c9514ee-65e0-4349-af35-8b7a65cf6bb9/watcher-api/0.log" Oct 06 13:21:42 crc kubenswrapper[4698]: I1006 13:21:42.476876 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:42 crc kubenswrapper[4698]: I1006 13:21:42.723566 4698 generic.go:334] "Generic (PLEG): container finished" podID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerID="61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00" exitCode=0 Oct 06 13:21:42 crc kubenswrapper[4698]: I1006 13:21:42.723755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerDied","Data":"61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00"} Oct 06 13:21:42 crc kubenswrapper[4698]: I1006 13:21:42.723862 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerStarted","Data":"af35d95137f1e081b92b08339989c30b69cf5e6bd65031b4295d5f749c2573d0"} Oct 06 13:21:43 crc kubenswrapper[4698]: I1006 13:21:43.734103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerStarted","Data":"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea"} Oct 06 13:21:44 crc kubenswrapper[4698]: I1006 13:21:44.329118 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:21:44 crc kubenswrapper[4698]: E1006 13:21:44.329589 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:44 crc kubenswrapper[4698]: I1006 13:21:44.744329 4698 generic.go:334] "Generic (PLEG): container finished" podID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerID="1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea" exitCode=0 Oct 06 13:21:44 crc kubenswrapper[4698]: I1006 13:21:44.744370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerDied","Data":"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea"} Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.181519 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.184847 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.187377 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.361537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.361595 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.361628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g2s\" (UniqueName: \"kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.463709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.463763 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.463796 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g2s\" (UniqueName: \"kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.464417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.464486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.494253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g2s\" (UniqueName: \"kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s\") pod \"redhat-marketplace-4kdf2\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.605774 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.755041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerStarted","Data":"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4"} Oct 06 13:21:45 crc kubenswrapper[4698]: I1006 13:21:45.792589 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dzn8r" podStartSLOduration=2.393602698 podStartE2EDuration="4.792569479s" podCreationTimestamp="2025-10-06 13:21:41 +0000 UTC" firstStartedPulling="2025-10-06 13:21:42.725177809 +0000 UTC m=+5790.137869982" lastFinishedPulling="2025-10-06 13:21:45.12414459 +0000 UTC m=+5792.536836763" observedRunningTime="2025-10-06 13:21:45.778876131 +0000 UTC m=+5793.191568304" watchObservedRunningTime="2025-10-06 13:21:45.792569479 +0000 UTC m=+5793.205261652" Oct 06 13:21:46 crc kubenswrapper[4698]: I1006 13:21:46.109274 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:21:46 crc kubenswrapper[4698]: I1006 13:21:46.765105 4698 generic.go:334] "Generic (PLEG): container finished" podID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerID="56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609" exitCode=0 Oct 06 13:21:46 crc kubenswrapper[4698]: I1006 13:21:46.765180 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerDied","Data":"56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609"} Oct 06 13:21:46 crc kubenswrapper[4698]: I1006 13:21:46.765408 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" 
event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerStarted","Data":"d3284f357f5d2999f95269bf567b0e947f1dbea7d78d3172a5e1064cf5a8fbf1"} Oct 06 13:21:47 crc kubenswrapper[4698]: I1006 13:21:47.776370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerStarted","Data":"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f"} Oct 06 13:21:48 crc kubenswrapper[4698]: I1006 13:21:48.785571 4698 generic.go:334] "Generic (PLEG): container finished" podID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerID="ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f" exitCode=0 Oct 06 13:21:48 crc kubenswrapper[4698]: I1006 13:21:48.785623 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerDied","Data":"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f"} Oct 06 13:21:49 crc kubenswrapper[4698]: I1006 13:21:49.795729 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerStarted","Data":"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2"} Oct 06 13:21:49 crc kubenswrapper[4698]: I1006 13:21:49.810804 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kdf2" podStartSLOduration=2.3949615140000002 podStartE2EDuration="4.810778001s" podCreationTimestamp="2025-10-06 13:21:45 +0000 UTC" firstStartedPulling="2025-10-06 13:21:46.767613327 +0000 UTC m=+5794.180305500" lastFinishedPulling="2025-10-06 13:21:49.183429814 +0000 UTC m=+5796.596121987" observedRunningTime="2025-10-06 13:21:49.810504005 +0000 UTC m=+5797.223196188" watchObservedRunningTime="2025-10-06 13:21:49.810778001 +0000 UTC 
m=+5797.223470174" Oct 06 13:21:51 crc kubenswrapper[4698]: I1006 13:21:51.920551 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:51 crc kubenswrapper[4698]: I1006 13:21:51.921027 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:51 crc kubenswrapper[4698]: I1006 13:21:51.973029 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:52 crc kubenswrapper[4698]: I1006 13:21:52.865640 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:53 crc kubenswrapper[4698]: I1006 13:21:53.967588 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:54 crc kubenswrapper[4698]: I1006 13:21:54.836032 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dzn8r" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="registry-server" containerID="cri-o://2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4" gracePeriod=2 Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.384739 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.472181 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content\") pod \"99388708-95cc-411a-b5a6-bf5ee45e5efc\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.472322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp95n\" (UniqueName: \"kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n\") pod \"99388708-95cc-411a-b5a6-bf5ee45e5efc\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.472473 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities\") pod \"99388708-95cc-411a-b5a6-bf5ee45e5efc\" (UID: \"99388708-95cc-411a-b5a6-bf5ee45e5efc\") " Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.473177 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities" (OuterVolumeSpecName: "utilities") pod "99388708-95cc-411a-b5a6-bf5ee45e5efc" (UID: "99388708-95cc-411a-b5a6-bf5ee45e5efc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.490738 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n" (OuterVolumeSpecName: "kube-api-access-qp95n") pod "99388708-95cc-411a-b5a6-bf5ee45e5efc" (UID: "99388708-95cc-411a-b5a6-bf5ee45e5efc"). InnerVolumeSpecName "kube-api-access-qp95n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.515701 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99388708-95cc-411a-b5a6-bf5ee45e5efc" (UID: "99388708-95cc-411a-b5a6-bf5ee45e5efc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.574346 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp95n\" (UniqueName: \"kubernetes.io/projected/99388708-95cc-411a-b5a6-bf5ee45e5efc-kube-api-access-qp95n\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.574397 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.574407 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99388708-95cc-411a-b5a6-bf5ee45e5efc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.605913 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.606081 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.675940 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.846679 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerID="2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4" exitCode=0 Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.846771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerDied","Data":"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4"} Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.846810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzn8r" event={"ID":"99388708-95cc-411a-b5a6-bf5ee45e5efc","Type":"ContainerDied","Data":"af35d95137f1e081b92b08339989c30b69cf5e6bd65031b4295d5f749c2573d0"} Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.846831 4698 scope.go:117] "RemoveContainer" containerID="2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.846843 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzn8r" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.881206 4698 scope.go:117] "RemoveContainer" containerID="1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.898532 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.906288 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dzn8r"] Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.909184 4698 scope.go:117] "RemoveContainer" containerID="61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.928536 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.982046 4698 scope.go:117] "RemoveContainer" containerID="2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4" Oct 06 13:21:55 crc kubenswrapper[4698]: E1006 13:21:55.982473 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4\": container with ID starting with 2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4 not found: ID does not exist" containerID="2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.982506 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4"} err="failed to get container status \"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4\": rpc error: code = NotFound desc = could not 
find container \"2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4\": container with ID starting with 2f491bfa454c752a329014a2987c7d9a29646bcb873ae518d0e4b511200d2af4 not found: ID does not exist" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.982526 4698 scope.go:117] "RemoveContainer" containerID="1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea" Oct 06 13:21:55 crc kubenswrapper[4698]: E1006 13:21:55.983162 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea\": container with ID starting with 1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea not found: ID does not exist" containerID="1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.983183 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea"} err="failed to get container status \"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea\": rpc error: code = NotFound desc = could not find container \"1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea\": container with ID starting with 1132ff43f188583660d32c79e62126bb3f4b53f2c2c37ea30e940bd129d1e9ea not found: ID does not exist" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.983195 4698 scope.go:117] "RemoveContainer" containerID="61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00" Oct 06 13:21:55 crc kubenswrapper[4698]: E1006 13:21:55.983390 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00\": container with ID starting with 61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00 not found: ID 
does not exist" containerID="61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00" Oct 06 13:21:55 crc kubenswrapper[4698]: I1006 13:21:55.983419 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00"} err="failed to get container status \"61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00\": rpc error: code = NotFound desc = could not find container \"61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00\": container with ID starting with 61812c6d34f10a3f95602c97289f77b328775dfe769cd6fb44e9586e46c6ab00 not found: ID does not exist" Oct 06 13:21:57 crc kubenswrapper[4698]: I1006 13:21:57.358635 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" path="/var/lib/kubelet/pods/99388708-95cc-411a-b5a6-bf5ee45e5efc/volumes" Oct 06 13:21:57 crc kubenswrapper[4698]: I1006 13:21:57.970904 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:21:58 crc kubenswrapper[4698]: I1006 13:21:58.885627 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4kdf2" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="registry-server" containerID="cri-o://2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2" gracePeriod=2 Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.330545 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:21:59 crc kubenswrapper[4698]: E1006 13:21:59.331169 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.397962 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.564637 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content\") pod \"36cb59e8-569c-47c2-875d-3e2db8449f11\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.564798 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g2s\" (UniqueName: \"kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s\") pod \"36cb59e8-569c-47c2-875d-3e2db8449f11\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.564869 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities\") pod \"36cb59e8-569c-47c2-875d-3e2db8449f11\" (UID: \"36cb59e8-569c-47c2-875d-3e2db8449f11\") " Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.566396 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities" (OuterVolumeSpecName: "utilities") pod "36cb59e8-569c-47c2-875d-3e2db8449f11" (UID: "36cb59e8-569c-47c2-875d-3e2db8449f11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.574548 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s" (OuterVolumeSpecName: "kube-api-access-d8g2s") pod "36cb59e8-569c-47c2-875d-3e2db8449f11" (UID: "36cb59e8-569c-47c2-875d-3e2db8449f11"). InnerVolumeSpecName "kube-api-access-d8g2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.585095 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36cb59e8-569c-47c2-875d-3e2db8449f11" (UID: "36cb59e8-569c-47c2-875d-3e2db8449f11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.669725 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.670095 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36cb59e8-569c-47c2-875d-3e2db8449f11-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.670196 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g2s\" (UniqueName: \"kubernetes.io/projected/36cb59e8-569c-47c2-875d-3e2db8449f11-kube-api-access-d8g2s\") on node \"crc\" DevicePath \"\"" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.902099 4698 generic.go:334] "Generic (PLEG): container finished" podID="36cb59e8-569c-47c2-875d-3e2db8449f11" 
containerID="2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2" exitCode=0 Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.902159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerDied","Data":"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2"} Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.902199 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kdf2" event={"ID":"36cb59e8-569c-47c2-875d-3e2db8449f11","Type":"ContainerDied","Data":"d3284f357f5d2999f95269bf567b0e947f1dbea7d78d3172a5e1064cf5a8fbf1"} Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.902226 4698 scope.go:117] "RemoveContainer" containerID="2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.903273 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kdf2" Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.953534 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:21:59 crc kubenswrapper[4698]: I1006 13:21:59.967155 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kdf2"] Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.023959 4698 scope.go:117] "RemoveContainer" containerID="ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.063201 4698 scope.go:117] "RemoveContainer" containerID="56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.129572 4698 scope.go:117] "RemoveContainer" containerID="2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2" Oct 06 13:22:00 crc kubenswrapper[4698]: E1006 13:22:00.130546 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2\": container with ID starting with 2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2 not found: ID does not exist" containerID="2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.130605 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2"} err="failed to get container status \"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2\": rpc error: code = NotFound desc = could not find container \"2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2\": container with ID starting with 2d77534d65fc1b3319b676962f61ad9c8e73a939bc2d94504c930e84bde98ab2 not found: 
ID does not exist" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.130633 4698 scope.go:117] "RemoveContainer" containerID="ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f" Oct 06 13:22:00 crc kubenswrapper[4698]: E1006 13:22:00.131200 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f\": container with ID starting with ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f not found: ID does not exist" containerID="ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.131245 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f"} err="failed to get container status \"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f\": rpc error: code = NotFound desc = could not find container \"ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f\": container with ID starting with ea139d8f5c5627297e0e1d133dd7345af8b6704e896d6ef47a3fe2be45ed109f not found: ID does not exist" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.131274 4698 scope.go:117] "RemoveContainer" containerID="56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609" Oct 06 13:22:00 crc kubenswrapper[4698]: E1006 13:22:00.131537 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609\": container with ID starting with 56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609 not found: ID does not exist" containerID="56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609" Oct 06 13:22:00 crc kubenswrapper[4698]: I1006 13:22:00.131567 4698 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609"} err="failed to get container status \"56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609\": rpc error: code = NotFound desc = could not find container \"56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609\": container with ID starting with 56f7826380faa2eb628e2202cecc6a50efcb8af548b10e1c382c5a0ff0af9609 not found: ID does not exist" Oct 06 13:22:01 crc kubenswrapper[4698]: I1006 13:22:01.346198 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" path="/var/lib/kubelet/pods/36cb59e8-569c-47c2-875d-3e2db8449f11/volumes" Oct 06 13:22:10 crc kubenswrapper[4698]: I1006 13:22:10.329464 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:22:10 crc kubenswrapper[4698]: E1006 13:22:10.330640 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:22:24 crc kubenswrapper[4698]: I1006 13:22:24.329673 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:22:24 crc kubenswrapper[4698]: E1006 13:22:24.330902 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:22:28 crc kubenswrapper[4698]: I1006 13:22:28.315954 4698 generic.go:334] "Generic (PLEG): container finished" podID="d0f73a38-6cb0-4c3e-889d-fa24328b8f74" containerID="11f09922956c3a20d8fd17ef38f97200ec74e9a00d3ecf365b5340971fbe6723" exitCode=0 Oct 06 13:22:28 crc kubenswrapper[4698]: I1006 13:22:28.316071 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-dljv2" event={"ID":"d0f73a38-6cb0-4c3e-889d-fa24328b8f74","Type":"ContainerDied","Data":"11f09922956c3a20d8fd17ef38f97200ec74e9a00d3ecf365b5340971fbe6723"} Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.444602 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.478803 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-dljv2"] Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.488083 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-dljv2"] Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.519077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfrg\" (UniqueName: \"kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg\") pod \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.519319 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host\") pod \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\" (UID: \"d0f73a38-6cb0-4c3e-889d-fa24328b8f74\") " Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.519366 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host" (OuterVolumeSpecName: "host") pod "d0f73a38-6cb0-4c3e-889d-fa24328b8f74" (UID: "d0f73a38-6cb0-4c3e-889d-fa24328b8f74"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.519874 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.525309 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg" (OuterVolumeSpecName: "kube-api-access-pvfrg") pod "d0f73a38-6cb0-4c3e-889d-fa24328b8f74" (UID: "d0f73a38-6cb0-4c3e-889d-fa24328b8f74"). InnerVolumeSpecName "kube-api-access-pvfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:22:29 crc kubenswrapper[4698]: I1006 13:22:29.621521 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvfrg\" (UniqueName: \"kubernetes.io/projected/d0f73a38-6cb0-4c3e-889d-fa24328b8f74-kube-api-access-pvfrg\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.336534 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78cee86cfc11b14fb6a98ca1aa958ab2b8921543e3d8cea9be4c7181cbc70859" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.336585 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-dljv2" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.686077 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-htmrk/crc-debug-vhclz"] Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687174 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="extract-content" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687204 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="extract-content" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687246 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687265 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687291 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687309 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687381 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="extract-content" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687396 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="extract-content" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687443 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="extract-utilities" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687457 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="extract-utilities" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687503 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f73a38-6cb0-4c3e-889d-fa24328b8f74" containerName="container-00" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687517 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f73a38-6cb0-4c3e-889d-fa24328b8f74" containerName="container-00" Oct 06 13:22:30 crc kubenswrapper[4698]: E1006 13:22:30.687549 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="extract-utilities" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687563 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="extract-utilities" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.687955 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f73a38-6cb0-4c3e-889d-fa24328b8f74" containerName="container-00" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.688005 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="99388708-95cc-411a-b5a6-bf5ee45e5efc" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.688138 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cb59e8-569c-47c2-875d-3e2db8449f11" containerName="registry-server" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.689328 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.848804 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.849324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45x84\" (UniqueName: \"kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.952387 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.952663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc kubenswrapper[4698]: I1006 13:22:30.952953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45x84\" (UniqueName: \"kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:30 crc 
kubenswrapper[4698]: I1006 13:22:30.978949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45x84\" (UniqueName: \"kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84\") pod \"crc-debug-vhclz\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:31 crc kubenswrapper[4698]: I1006 13:22:31.016322 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:31 crc kubenswrapper[4698]: I1006 13:22:31.341981 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f73a38-6cb0-4c3e-889d-fa24328b8f74" path="/var/lib/kubelet/pods/d0f73a38-6cb0-4c3e-889d-fa24328b8f74/volumes" Oct 06 13:22:31 crc kubenswrapper[4698]: I1006 13:22:31.348904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-vhclz" event={"ID":"8218266a-39f3-45c3-ba61-6711aff1595f","Type":"ContainerStarted","Data":"4d19786a6933bdcd0c6f0f3903e24276a93f4bba4eabbec4b59b8203f49ea2f5"} Oct 06 13:22:31 crc kubenswrapper[4698]: I1006 13:22:31.348949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-vhclz" event={"ID":"8218266a-39f3-45c3-ba61-6711aff1595f","Type":"ContainerStarted","Data":"6c46900747540b8d4542923b02b7e801327b6a5e109eabc3169ba3956abe1bc4"} Oct 06 13:22:31 crc kubenswrapper[4698]: I1006 13:22:31.386958 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-htmrk/crc-debug-vhclz" podStartSLOduration=1.3864311630000001 podStartE2EDuration="1.386431163s" podCreationTimestamp="2025-10-06 13:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:22:31.369127786 +0000 UTC m=+5838.781819959" watchObservedRunningTime="2025-10-06 13:22:31.386431163 
+0000 UTC m=+5838.799123366" Oct 06 13:22:32 crc kubenswrapper[4698]: I1006 13:22:32.360925 4698 generic.go:334] "Generic (PLEG): container finished" podID="8218266a-39f3-45c3-ba61-6711aff1595f" containerID="4d19786a6933bdcd0c6f0f3903e24276a93f4bba4eabbec4b59b8203f49ea2f5" exitCode=0 Oct 06 13:22:32 crc kubenswrapper[4698]: I1006 13:22:32.360979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-vhclz" event={"ID":"8218266a-39f3-45c3-ba61-6711aff1595f","Type":"ContainerDied","Data":"4d19786a6933bdcd0c6f0f3903e24276a93f4bba4eabbec4b59b8203f49ea2f5"} Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.471627 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.595717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45x84\" (UniqueName: \"kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84\") pod \"8218266a-39f3-45c3-ba61-6711aff1595f\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.595905 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host\") pod \"8218266a-39f3-45c3-ba61-6711aff1595f\" (UID: \"8218266a-39f3-45c3-ba61-6711aff1595f\") " Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.596819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host" (OuterVolumeSpecName: "host") pod "8218266a-39f3-45c3-ba61-6711aff1595f" (UID: "8218266a-39f3-45c3-ba61-6711aff1595f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.606530 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84" (OuterVolumeSpecName: "kube-api-access-45x84") pod "8218266a-39f3-45c3-ba61-6711aff1595f" (UID: "8218266a-39f3-45c3-ba61-6711aff1595f"). InnerVolumeSpecName "kube-api-access-45x84". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.697967 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45x84\" (UniqueName: \"kubernetes.io/projected/8218266a-39f3-45c3-ba61-6711aff1595f-kube-api-access-45x84\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:33 crc kubenswrapper[4698]: I1006 13:22:33.697994 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8218266a-39f3-45c3-ba61-6711aff1595f-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:34 crc kubenswrapper[4698]: I1006 13:22:34.382208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-vhclz" event={"ID":"8218266a-39f3-45c3-ba61-6711aff1595f","Type":"ContainerDied","Data":"6c46900747540b8d4542923b02b7e801327b6a5e109eabc3169ba3956abe1bc4"} Oct 06 13:22:34 crc kubenswrapper[4698]: I1006 13:22:34.382488 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c46900747540b8d4542923b02b7e801327b6a5e109eabc3169ba3956abe1bc4" Oct 06 13:22:34 crc kubenswrapper[4698]: I1006 13:22:34.382308 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-vhclz" Oct 06 13:22:37 crc kubenswrapper[4698]: I1006 13:22:37.329383 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:22:37 crc kubenswrapper[4698]: E1006 13:22:37.330135 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:22:40 crc kubenswrapper[4698]: I1006 13:22:40.469312 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-vhclz"] Oct 06 13:22:40 crc kubenswrapper[4698]: I1006 13:22:40.477407 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-vhclz"] Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.342723 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8218266a-39f3-45c3-ba61-6711aff1595f" path="/var/lib/kubelet/pods/8218266a-39f3-45c3-ba61-6711aff1595f/volumes" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.716101 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-htmrk/crc-debug-82n9f"] Oct 06 13:22:41 crc kubenswrapper[4698]: E1006 13:22:41.716678 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8218266a-39f3-45c3-ba61-6711aff1595f" containerName="container-00" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.716699 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8218266a-39f3-45c3-ba61-6711aff1595f" containerName="container-00" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.717139 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8218266a-39f3-45c3-ba61-6711aff1595f" containerName="container-00" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.718240 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.841811 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vtp\" (UniqueName: \"kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.841977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.944578 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vtp\" (UniqueName: \"kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.944691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.944935 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:41 crc kubenswrapper[4698]: I1006 13:22:41.970208 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vtp\" (UniqueName: \"kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp\") pod \"crc-debug-82n9f\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.039228 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:42 crc kubenswrapper[4698]: W1006 13:22:42.094341 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfaf622e_8d10_4769_bb32_f1b4677f9b20.slice/crio-bbe82c627767f0323d0501c5f83fdf799ceb12589f15c2e0be96b1359a428464 WatchSource:0}: Error finding container bbe82c627767f0323d0501c5f83fdf799ceb12589f15c2e0be96b1359a428464: Status 404 returned error can't find the container with id bbe82c627767f0323d0501c5f83fdf799ceb12589f15c2e0be96b1359a428464 Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.462781 4698 generic.go:334] "Generic (PLEG): container finished" podID="bfaf622e-8d10-4769-bb32-f1b4677f9b20" containerID="f3faa9b0bf05d474fc9ddc8b58f80d65ad96d918b52d948a6173bfb4d9ae70df" exitCode=0 Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.462871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-82n9f" event={"ID":"bfaf622e-8d10-4769-bb32-f1b4677f9b20","Type":"ContainerDied","Data":"f3faa9b0bf05d474fc9ddc8b58f80d65ad96d918b52d948a6173bfb4d9ae70df"} Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.463113 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-htmrk/crc-debug-82n9f" event={"ID":"bfaf622e-8d10-4769-bb32-f1b4677f9b20","Type":"ContainerStarted","Data":"bbe82c627767f0323d0501c5f83fdf799ceb12589f15c2e0be96b1359a428464"} Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.511477 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-82n9f"] Oct 06 13:22:42 crc kubenswrapper[4698]: I1006 13:22:42.522722 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-htmrk/crc-debug-82n9f"] Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.595924 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.675787 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host\") pod \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.675968 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host" (OuterVolumeSpecName: "host") pod "bfaf622e-8d10-4769-bb32-f1b4677f9b20" (UID: "bfaf622e-8d10-4769-bb32-f1b4677f9b20"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.676070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vtp\" (UniqueName: \"kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp\") pod \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\" (UID: \"bfaf622e-8d10-4769-bb32-f1b4677f9b20\") " Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.676525 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfaf622e-8d10-4769-bb32-f1b4677f9b20-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.681423 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp" (OuterVolumeSpecName: "kube-api-access-r7vtp") pod "bfaf622e-8d10-4769-bb32-f1b4677f9b20" (UID: "bfaf622e-8d10-4769-bb32-f1b4677f9b20"). InnerVolumeSpecName "kube-api-access-r7vtp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:22:43 crc kubenswrapper[4698]: I1006 13:22:43.777821 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7vtp\" (UniqueName: \"kubernetes.io/projected/bfaf622e-8d10-4769-bb32-f1b4677f9b20-kube-api-access-r7vtp\") on node \"crc\" DevicePath \"\"" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.278809 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.413976 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.465340 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.479472 4698 scope.go:117] "RemoveContainer" containerID="f3faa9b0bf05d474fc9ddc8b58f80d65ad96d918b52d948a6173bfb4d9ae70df" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.479549 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/crc-debug-82n9f" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.508724 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.683802 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.686709 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.702869 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/extract/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.839364 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-fgmjd_d2432ca3-e684-4c81-95c8-1e57826d09d6/kube-rbac-proxy/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.926554 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-fgmjd_d2432ca3-e684-4c81-95c8-1e57826d09d6/manager/0.log" Oct 06 13:22:44 crc kubenswrapper[4698]: I1006 13:22:44.934929 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-wvf75_0b715645-3bcb-4443-892b-e30062c78a7f/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.076637 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-wvf75_0b715645-3bcb-4443-892b-e30062c78a7f/manager/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.106716 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-jncqt_b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.140767 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-jncqt_b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1/manager/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.338223 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfaf622e-8d10-4769-bb32-f1b4677f9b20" path="/var/lib/kubelet/pods/bfaf622e-8d10-4769-bb32-f1b4677f9b20/volumes" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.381752 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-tnv74_e2d5b718-b49a-46c0-9f1d-1e536ff62301/manager/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.386603 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-tnv74_e2d5b718-b49a-46c0-9f1d-1e536ff62301/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.487036 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-689sr_110b7f13-850f-41a3-aadb-df0f5559ba1d/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.572955 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gprg9_92e02173-4289-4b84-b3b2-01b78d0a7205/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.573961 
4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-689sr_110b7f13-850f-41a3-aadb-df0f5559ba1d/manager/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.717681 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gprg9_92e02173-4289-4b84-b3b2-01b78d0a7205/manager/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.827420 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-p5rgb_c03f5f3c-6e6c-4eba-9a1f-695c23c0d995/kube-rbac-proxy/0.log" Oct 06 13:22:45 crc kubenswrapper[4698]: I1006 13:22:45.985064 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-p5rgb_c03f5f3c-6e6c-4eba-9a1f-695c23c0d995/manager/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.036450 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-4zdhn_d6f6350d-b33d-4ac5-b364-c80145b4b742/kube-rbac-proxy/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.085725 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-4zdhn_d6f6350d-b33d-4ac5-b364-c80145b4b742/manager/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.221369 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-25w45_b632a477-335c-4b0e-a83e-3812409b8afa/kube-rbac-proxy/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.286437 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-25w45_b632a477-335c-4b0e-a83e-3812409b8afa/manager/0.log" Oct 06 13:22:46 crc 
kubenswrapper[4698]: I1006 13:22:46.434343 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-swz27_9543eb0d-82ab-4599-b094-8789588846af/kube-rbac-proxy/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.440432 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-swz27_9543eb0d-82ab-4599-b094-8789588846af/manager/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.480127 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-hg86j_9d910961-2283-4129-a2e0-6cec10da5779/kube-rbac-proxy/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.650844 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-w5cv6_fa0c0f93-841b-4e62-becb-32dcf40ae439/kube-rbac-proxy/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.656003 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-hg86j_9d910961-2283-4129-a2e0-6cec10da5779/manager/0.log" Oct 06 13:22:46 crc kubenswrapper[4698]: I1006 13:22:46.805858 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-w5cv6_fa0c0f93-841b-4e62-becb-32dcf40ae439/manager/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.015872 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rgxcq_38d45acb-51da-4535-a6a8-a317360f96fd/kube-rbac-proxy/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.112401 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rgxcq_38d45acb-51da-4535-a6a8-a317360f96fd/manager/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.188890 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-j9bnp_56e863a6-f963-4d2f-9de6-7805ff14e90a/kube-rbac-proxy/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.257520 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-j9bnp_56e863a6-f963-4d2f-9de6-7805ff14e90a/manager/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.391116 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp_744f45cb-8563-4bf2-90f1-59f2caa1e4f4/kube-rbac-proxy/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.472529 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp_744f45cb-8563-4bf2-90f1-59f2caa1e4f4/manager/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.505184 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-767bcbdf69-tr7dh_7c1cbc98-12c2-409b-b673-0f3df8edd0fc/kube-rbac-proxy/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.710907 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5dfb49f657-vf592_5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae/kube-rbac-proxy/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.842201 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-spxxt_9b16e745-42a3-4aaf-ad06-dab67bab9ce7/registry-server/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 
13:22:47.877206 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5dfb49f657-vf592_5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae/operator/0.log" Oct 06 13:22:47 crc kubenswrapper[4698]: I1006 13:22:47.977311 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-c8sxb_437c5088-93d6-4331-8671-e4e537e553a7/kube-rbac-proxy/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.139821 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-c8sxb_437c5088-93d6-4331-8671-e4e537e553a7/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.157270 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-vwcqq_6e97841f-b15e-4834-a445-d2a632d7021a/kube-rbac-proxy/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.278391 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-vwcqq_6e97841f-b15e-4834-a445-d2a632d7021a/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.383186 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-85kfz_572054de-889b-43ac-abb2-8bca55810d18/operator/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.550857 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-496mk_782cf4ae-9b34-46e9-9bfc-c7da6118c2dc/kube-rbac-proxy/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.622183 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-r6ntv_6cd7d60a-943c-42e8-9b96-74e76f1338f6/kube-rbac-proxy/0.log" 
Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.674532 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-767bcbdf69-tr7dh_7c1cbc98-12c2-409b-b673-0f3df8edd0fc/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.679991 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-496mk_782cf4ae-9b34-46e9-9bfc-c7da6118c2dc/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.791916 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pnks4_6fdb9f18-6759-435a-bae6-90271f8da5b0/kube-rbac-proxy/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.935171 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pnks4_6fdb9f18-6759-435a-bae6-90271f8da5b0/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.938910 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-r6ntv_6cd7d60a-943c-42e8-9b96-74e76f1338f6/manager/0.log" Oct 06 13:22:48 crc kubenswrapper[4698]: I1006 13:22:48.986685 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-99f6c4584-gxz2f_33858802-bf6b-42d2-bdc6-8ec2202dd1fe/kube-rbac-proxy/0.log" Oct 06 13:22:49 crc kubenswrapper[4698]: I1006 13:22:49.091557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-99f6c4584-gxz2f_33858802-bf6b-42d2-bdc6-8ec2202dd1fe/manager/0.log" Oct 06 13:22:50 crc kubenswrapper[4698]: I1006 13:22:50.328999 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:22:50 crc kubenswrapper[4698]: E1006 
13:22:50.329394 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:23:01 crc kubenswrapper[4698]: I1006 13:23:01.330257 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:23:01 crc kubenswrapper[4698]: E1006 13:23:01.331210 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:23:05 crc kubenswrapper[4698]: I1006 13:23:05.431519 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2j94r_c51a9b0f-7c30-4d46-8b1c-f248ce31b955/control-plane-machine-set-operator/0.log" Oct 06 13:23:05 crc kubenswrapper[4698]: I1006 13:23:05.600158 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6dhbx_eef5ed90-dd02-478f-8038-4970199b1cac/machine-api-operator/0.log" Oct 06 13:23:05 crc kubenswrapper[4698]: I1006 13:23:05.604065 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6dhbx_eef5ed90-dd02-478f-8038-4970199b1cac/kube-rbac-proxy/0.log" Oct 06 13:23:14 crc kubenswrapper[4698]: I1006 13:23:14.329987 4698 scope.go:117] "RemoveContainer" 
containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:23:14 crc kubenswrapper[4698]: E1006 13:23:14.331170 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:23:18 crc kubenswrapper[4698]: I1006 13:23:18.387828 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wpvgh_c57ea6be-96d1-4d4f-8c49-94ee240a5482/cert-manager-controller/0.log" Oct 06 13:23:18 crc kubenswrapper[4698]: I1006 13:23:18.555735 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-xncst_b73b818b-7d2e-4c3f-9622-77ee5c1fc72d/cert-manager-cainjector/0.log" Oct 06 13:23:18 crc kubenswrapper[4698]: I1006 13:23:18.590596 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-sbjmt_be67c15a-01a0-435f-995b-f61cd109d8c8/cert-manager-webhook/0.log" Oct 06 13:23:26 crc kubenswrapper[4698]: I1006 13:23:26.329163 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:23:26 crc kubenswrapper[4698]: E1006 13:23:26.329893 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:23:31 crc 
kubenswrapper[4698]: I1006 13:23:31.448565 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-4b488_d1028a8b-c391-4df6-978c-83a168615335/nmstate-console-plugin/0.log" Oct 06 13:23:31 crc kubenswrapper[4698]: I1006 13:23:31.565906 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kgvmq_e62b6cf6-ece4-46f0-9aba-887633daf472/nmstate-handler/0.log" Oct 06 13:23:31 crc kubenswrapper[4698]: I1006 13:23:31.614950 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-hvpml_3fd88842-bd7f-4a22-9289-55f917571cbf/kube-rbac-proxy/0.log" Oct 06 13:23:31 crc kubenswrapper[4698]: I1006 13:23:31.682748 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-hvpml_3fd88842-bd7f-4a22-9289-55f917571cbf/nmstate-metrics/0.log" Oct 06 13:23:31 crc kubenswrapper[4698]: I1006 13:23:31.754351 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-dlzmx_143963ef-7761-472a-b173-7407f5b7befb/nmstate-operator/0.log" Oct 06 13:23:31 crc kubenswrapper[4698]: I1006 13:23:31.903155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-w6q8p_90215016-8b6e-445d-a43a-d87dafd57bf2/nmstate-webhook/0.log" Oct 06 13:23:38 crc kubenswrapper[4698]: I1006 13:23:38.330220 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:23:38 crc kubenswrapper[4698]: E1006 13:23:38.330954 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.344077 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-m6ns6_6fe4b880-4427-41e6-96d1-50cbc874aa6b/kube-rbac-proxy/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.461481 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-m6ns6_6fe4b880-4427-41e6-96d1-50cbc874aa6b/controller/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.518808 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.702304 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.710328 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.738254 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.759186 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.942052 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.943374 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:23:47 crc kubenswrapper[4698]: I1006 13:23:47.991804 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.008070 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.177138 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.196720 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.213627 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/controller/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.218599 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.426650 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/frr-metrics/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.562536 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/kube-rbac-proxy/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.588940 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/kube-rbac-proxy-frr/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.625236 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/reloader/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.762007 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-vrr9z_90b467bf-2ac1-461a-83d5-db35ed92d625/frr-k8s-webhook-server/0.log" Oct 06 13:23:48 crc kubenswrapper[4698]: I1006 13:23:48.996759 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-84678b9ffd-5hzgg_9c5b34b7-49db-4807-8a96-dec961b07948/manager/0.log" Oct 06 13:23:49 crc kubenswrapper[4698]: I1006 13:23:49.150495 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64685f8694-khhpz_70df7ae7-15a5-42ad-8db5-728081b24cd9/webhook-server/0.log" Oct 06 13:23:49 crc kubenswrapper[4698]: I1006 13:23:49.320655 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hb6mj_30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5/kube-rbac-proxy/0.log" Oct 06 13:23:49 crc kubenswrapper[4698]: I1006 13:23:49.888636 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hb6mj_30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5/speaker/0.log" Oct 06 13:23:50 crc kubenswrapper[4698]: I1006 13:23:50.274525 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/frr/0.log" Oct 06 13:23:51 crc kubenswrapper[4698]: I1006 13:23:51.329383 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:23:51 crc kubenswrapper[4698]: E1006 13:23:51.330034 4698 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:24:02 crc kubenswrapper[4698]: I1006 13:24:02.329605 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:24:02 crc kubenswrapper[4698]: E1006 13:24:02.330366 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.026352 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.108519 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.149238 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.169252 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.317329 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.335103 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.398253 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/extract/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.487392 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.739754 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.755983 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.756076 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 
13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.940251 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.956626 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 13:24:03 crc kubenswrapper[4698]: I1006 13:24:03.966635 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/extract/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.100683 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.284542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.301806 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.310736 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.452239 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:24:04 crc 
kubenswrapper[4698]: I1006 13:24:04.477370 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.721366 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.881650 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.886391 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:24:04 crc kubenswrapper[4698]: I1006 13:24:04.982821 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.060849 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/registry-server/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.132385 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.133221 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.301737 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.560345 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.611684 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.641207 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.830263 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.838355 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.887290 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/extract/0.log" Oct 06 13:24:05 crc kubenswrapper[4698]: I1006 13:24:05.925824 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/registry-server/0.log" Oct 06 13:24:06 crc 
kubenswrapper[4698]: I1006 13:24:06.040873 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sbkqv_debcb559-cc53-4d24-9eb0-233c76c3cab1/marketplace-operator/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.089267 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.320827 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.330422 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.337165 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.493134 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.496229 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.502862 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.715398 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/registry-server/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.774682 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.779253 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.823389 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.964650 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:24:06 crc kubenswrapper[4698]: I1006 13:24:06.997728 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:24:07 crc kubenswrapper[4698]: I1006 13:24:07.619240 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/registry-server/0.log" Oct 06 13:24:13 crc kubenswrapper[4698]: I1006 13:24:13.340286 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:24:13 crc kubenswrapper[4698]: E1006 13:24:13.340912 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:24:20 crc kubenswrapper[4698]: I1006 13:24:20.118272 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-4vjwn_795598bd-9625-48d4-8b2b-9d5d5418391a/prometheus-operator/0.log" Oct 06 13:24:20 crc kubenswrapper[4698]: I1006 13:24:20.294628 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx_d4be74b3-b8b9-45af-b971-bd29e82d0879/prometheus-operator-admission-webhook/0.log" Oct 06 13:24:20 crc kubenswrapper[4698]: I1006 13:24:20.366954 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j_2986e2db-d42d-417a-b203-1eb36ae90468/prometheus-operator-admission-webhook/0.log" Oct 06 13:24:20 crc kubenswrapper[4698]: I1006 13:24:20.519966 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-9sp88_c711bfcd-11d2-4ad7-8059-9f1f406dd064/operator/0.log" Oct 06 13:24:20 crc kubenswrapper[4698]: I1006 13:24:20.542903 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-gl2vd_bc83ee37-67c1-4393-83c8-9ee46b2c1d30/perses-operator/0.log" Oct 06 13:24:26 crc kubenswrapper[4698]: I1006 13:24:26.329398 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:24:27 crc kubenswrapper[4698]: I1006 13:24:27.531048 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e"} Oct 06 13:25:45 crc kubenswrapper[4698]: I1006 13:25:45.453172 4698 scope.go:117] "RemoveContainer" containerID="0fb37890707b463cbb0041fde38f775b0daa4cb211770e94d6152044f4356697" Oct 06 13:25:45 crc kubenswrapper[4698]: I1006 13:25:45.484761 4698 scope.go:117] "RemoveContainer" containerID="08f57b7c88170a0d863146126ee27c774d8c9f3ed9a52d9004b09cf824fb1836" Oct 06 13:25:45 crc kubenswrapper[4698]: I1006 13:25:45.541814 4698 scope.go:117] "RemoveContainer" containerID="cedc82aa78a24193b4c068cb3b32af610963efd6064f940946136bc43a09f412" Oct 06 13:26:30 crc kubenswrapper[4698]: I1006 13:26:30.991612 4698 generic.go:334] "Generic (PLEG): container finished" podID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerID="2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd" exitCode=0 Oct 06 13:26:30 crc kubenswrapper[4698]: I1006 13:26:30.991681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-htmrk/must-gather-c54sp" event={"ID":"d83fa784-e87e-4c0c-a670-d2001afca26e","Type":"ContainerDied","Data":"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd"} Oct 06 13:26:30 crc kubenswrapper[4698]: I1006 13:26:30.993649 4698 scope.go:117] "RemoveContainer" containerID="2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd" Oct 06 13:26:31 crc kubenswrapper[4698]: I1006 13:26:31.100469 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-htmrk_must-gather-c54sp_d83fa784-e87e-4c0c-a670-d2001afca26e/gather/0.log" Oct 06 13:26:39 crc kubenswrapper[4698]: I1006 13:26:39.952629 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-htmrk/must-gather-c54sp"] Oct 06 13:26:39 crc kubenswrapper[4698]: I1006 13:26:39.954292 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-htmrk/must-gather-c54sp" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="copy" containerID="cri-o://4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf" gracePeriod=2 Oct 06 13:26:39 crc kubenswrapper[4698]: I1006 13:26:39.964489 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-htmrk/must-gather-c54sp"] Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.406246 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-htmrk_must-gather-c54sp_d83fa784-e87e-4c0c-a670-d2001afca26e/copy/0.log" Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.406908 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.446895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output\") pod \"d83fa784-e87e-4c0c-a670-d2001afca26e\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.446967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7llkj\" (UniqueName: \"kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj\") pod \"d83fa784-e87e-4c0c-a670-d2001afca26e\" (UID: \"d83fa784-e87e-4c0c-a670-d2001afca26e\") " Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.453798 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj" (OuterVolumeSpecName: "kube-api-access-7llkj") pod "d83fa784-e87e-4c0c-a670-d2001afca26e" (UID: "d83fa784-e87e-4c0c-a670-d2001afca26e"). InnerVolumeSpecName "kube-api-access-7llkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.549957 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7llkj\" (UniqueName: \"kubernetes.io/projected/d83fa784-e87e-4c0c-a670-d2001afca26e-kube-api-access-7llkj\") on node \"crc\" DevicePath \"\"" Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.671293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d83fa784-e87e-4c0c-a670-d2001afca26e" (UID: "d83fa784-e87e-4c0c-a670-d2001afca26e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:26:40 crc kubenswrapper[4698]: I1006 13:26:40.755388 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d83fa784-e87e-4c0c-a670-d2001afca26e-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.099562 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-htmrk_must-gather-c54sp_d83fa784-e87e-4c0c-a670-d2001afca26e/copy/0.log" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.100073 4698 generic.go:334] "Generic (PLEG): container finished" podID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerID="4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf" exitCode=143 Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.100127 4698 scope.go:117] "RemoveContainer" containerID="4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.100143 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-htmrk/must-gather-c54sp" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.137562 4698 scope.go:117] "RemoveContainer" containerID="2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.197387 4698 scope.go:117] "RemoveContainer" containerID="4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf" Oct 06 13:26:41 crc kubenswrapper[4698]: E1006 13:26:41.200746 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf\": container with ID starting with 4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf not found: ID does not exist" containerID="4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.200795 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf"} err="failed to get container status \"4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf\": rpc error: code = NotFound desc = could not find container \"4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf\": container with ID starting with 4d2214ee1f0f30c177952c6bb9298b1388a1555a80cd2d604f5bc7c549b737cf not found: ID does not exist" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.200821 4698 scope.go:117] "RemoveContainer" containerID="2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd" Oct 06 13:26:41 crc kubenswrapper[4698]: E1006 13:26:41.202654 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd\": container with ID starting with 
2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd not found: ID does not exist" containerID="2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.202727 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd"} err="failed to get container status \"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd\": rpc error: code = NotFound desc = could not find container \"2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd\": container with ID starting with 2e550b2efdaad0f474b7380543748aae908649d18dea0f9e8f7e17e2aaec6fdd not found: ID does not exist" Oct 06 13:26:41 crc kubenswrapper[4698]: I1006 13:26:41.340444 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" path="/var/lib/kubelet/pods/d83fa784-e87e-4c0c-a670-d2001afca26e/volumes" Oct 06 13:26:45 crc kubenswrapper[4698]: I1006 13:26:45.616213 4698 scope.go:117] "RemoveContainer" containerID="11f09922956c3a20d8fd17ef38f97200ec74e9a00d3ecf365b5340971fbe6723" Oct 06 13:26:55 crc kubenswrapper[4698]: I1006 13:26:55.235005 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:26:55 crc kubenswrapper[4698]: I1006 13:26:55.235572 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 
13:27:20.661687 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wfkwq/must-gather-chg92"] Oct 06 13:27:20 crc kubenswrapper[4698]: E1006 13:27:20.662722 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="copy" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.662741 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="copy" Oct 06 13:27:20 crc kubenswrapper[4698]: E1006 13:27:20.662793 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="gather" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.662801 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="gather" Oct 06 13:27:20 crc kubenswrapper[4698]: E1006 13:27:20.662830 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfaf622e-8d10-4769-bb32-f1b4677f9b20" containerName="container-00" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.662838 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfaf622e-8d10-4769-bb32-f1b4677f9b20" containerName="container-00" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.663071 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="gather" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.663090 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83fa784-e87e-4c0c-a670-d2001afca26e" containerName="copy" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.663120 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfaf622e-8d10-4769-bb32-f1b4677f9b20" containerName="container-00" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.664445 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.675666 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wfkwq"/"default-dockercfg-h8j7x" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.678638 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wfkwq"/"kube-root-ca.crt" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.678924 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wfkwq"/"openshift-service-ca.crt" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.687364 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wfkwq/must-gather-chg92"] Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.844104 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.844501 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bd8t\" (UniqueName: \"kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.946163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bd8t\" (UniqueName: \"kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " 
pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.946331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.946725 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.965643 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bd8t\" (UniqueName: \"kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t\") pod \"must-gather-chg92\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:20 crc kubenswrapper[4698]: I1006 13:27:20.987339 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:27:21 crc kubenswrapper[4698]: I1006 13:27:21.462758 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wfkwq/must-gather-chg92"] Oct 06 13:27:21 crc kubenswrapper[4698]: I1006 13:27:21.558550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/must-gather-chg92" event={"ID":"ceccbe49-690a-417f-9270-ae954f09dc6d","Type":"ContainerStarted","Data":"41fd51a839ff99e168d3d0070314622b35e039ab24acd0a8f552c0fb89ff8d34"} Oct 06 13:27:22 crc kubenswrapper[4698]: I1006 13:27:22.569369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/must-gather-chg92" event={"ID":"ceccbe49-690a-417f-9270-ae954f09dc6d","Type":"ContainerStarted","Data":"516a1151609b425dd5381485ee1a155da89e6478e0b9ccbfa2e7845e9afa4fd8"} Oct 06 13:27:22 crc kubenswrapper[4698]: I1006 13:27:22.569782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/must-gather-chg92" event={"ID":"ceccbe49-690a-417f-9270-ae954f09dc6d","Type":"ContainerStarted","Data":"2abef078383b079873e3b2ce34b108d1705c65053bd9b553684856d8dacef50e"} Oct 06 13:27:22 crc kubenswrapper[4698]: I1006 13:27:22.627010 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wfkwq/must-gather-chg92" podStartSLOduration=2.626827417 podStartE2EDuration="2.626827417s" podCreationTimestamp="2025-10-06 13:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:27:22.619387423 +0000 UTC m=+6130.032079616" watchObservedRunningTime="2025-10-06 13:27:22.626827417 +0000 UTC m=+6130.039519600" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.200088 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-68mml"] Oct 06 13:27:25 crc kubenswrapper[4698]: 
I1006 13:27:25.201838 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.235256 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.235610 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.356117 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.356192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4qn\" (UniqueName: \"kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.459162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " 
pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.459214 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4qn\" (UniqueName: \"kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.459335 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.477369 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4qn\" (UniqueName: \"kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn\") pod \"crc-debug-68mml\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.526870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:27:25 crc kubenswrapper[4698]: I1006 13:27:25.593353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-68mml" event={"ID":"37ae86d9-05fc-4cae-b143-b0204cb21893","Type":"ContainerStarted","Data":"2d44e76486587d3d5b6f4722a83ce18965c0fbe863389e2f8733a6a21d71b8c6"} Oct 06 13:27:26 crc kubenswrapper[4698]: I1006 13:27:26.607372 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-68mml" event={"ID":"37ae86d9-05fc-4cae-b143-b0204cb21893","Type":"ContainerStarted","Data":"5f7ef413f32e5d2a1fa996323814e68da2bf3c2673bc755eda82a52aad1a54f9"} Oct 06 13:27:26 crc kubenswrapper[4698]: I1006 13:27:26.631507 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wfkwq/crc-debug-68mml" podStartSLOduration=1.63148212 podStartE2EDuration="1.63148212s" podCreationTimestamp="2025-10-06 13:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:27:26.62056258 +0000 UTC m=+6134.033254763" watchObservedRunningTime="2025-10-06 13:27:26.63148212 +0000 UTC m=+6134.044174303" Oct 06 13:27:45 crc kubenswrapper[4698]: I1006 13:27:45.723094 4698 scope.go:117] "RemoveContainer" containerID="c45822fdb56374dfd293be702dfdd9f829e270247881b564e0f8b714dd5a17f0" Oct 06 13:27:45 crc kubenswrapper[4698]: I1006 13:27:45.752633 4698 scope.go:117] "RemoveContainer" containerID="1735f58a6c1b2cda1237224429a96fecc36f3176be51f1bf57c15b71f810d696" Oct 06 13:27:45 crc kubenswrapper[4698]: I1006 13:27:45.773728 4698 scope.go:117] "RemoveContainer" containerID="a66a0082bd39abe4b59f7aa5d7e7353691ab503ce594e82f082c4e14cb12fb53" Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.234910 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.235577 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.235634 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.236366 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.236420 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e" gracePeriod=600 Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.923863 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerID="46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e" exitCode=0 Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.924002 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e"} Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.924277 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerStarted","Data":"e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71"} Oct 06 13:27:55 crc kubenswrapper[4698]: I1006 13:27:55.924299 4698 scope.go:117] "RemoveContainer" containerID="b0f22369546a84f228867def5f5102626bc3f0dae84a1cd03d3e417bc7073846" Oct 06 13:28:45 crc kubenswrapper[4698]: I1006 13:28:45.863822 4698 scope.go:117] "RemoveContainer" containerID="4d19786a6933bdcd0c6f0f3903e24276a93f4bba4eabbec4b59b8203f49ea2f5" Oct 06 13:28:46 crc kubenswrapper[4698]: I1006 13:28:46.828768 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6857b4f974-dqhrx_61b610ef-3459-4cf9-9328-d1f95d01be7a/barbican-api/0.log" Oct 06 13:28:46 crc kubenswrapper[4698]: I1006 13:28:46.903434 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6857b4f974-dqhrx_61b610ef-3459-4cf9-9328-d1f95d01be7a/barbican-api-log/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.051043 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-746dff6454-x5fd6_fc5e62c6-2df3-4629-831b-a2342fef2343/barbican-keystone-listener/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.108838 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-746dff6454-x5fd6_fc5e62c6-2df3-4629-831b-a2342fef2343/barbican-keystone-listener-log/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.284803 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-58d8667c5c-dbf84_fa92b339-0782-432a-a352-5a0718033683/barbican-worker/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.302362 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58d8667c5c-dbf84_fa92b339-0782-432a-a352-5a0718033683/barbican-worker-log/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.491383 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-885pc_7a9dbb12-cd2b-4f3a-a602-35ae29132726/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.733543 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/ceilometer-central-agent/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.753620 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/ceilometer-notification-agent/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.791155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/proxy-httpd/0.log" Oct 06 13:28:47 crc kubenswrapper[4698]: I1006 13:28:47.942126 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_71e624a3-d6ee-458b-be82-fcc805fbc29b/sg-core/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.073087 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b5496d3c-491b-4f5d-8351-2e7eac348fd2/cinder-api/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.132736 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b5496d3c-491b-4f5d-8351-2e7eac348fd2/cinder-api-log/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.329414 4698 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd84c444-81fa-4206-8517-a25ba61c7209/cinder-scheduler/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.412170 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd84c444-81fa-4206-8517-a25ba61c7209/probe/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.537476 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ltjln_f084d261-7f67-4be1-83b2-7e1c379e0ffe/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.711917 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5jclg_166970c1-3e73-47ca-b4c7-ea9c980ce7bb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.844936 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s7ck7_be279c30-e0a4-4828-8e13-2375265bb01f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:48 crc kubenswrapper[4698]: I1006 13:28:48.975183 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/init/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.203266 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/init/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.326360 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-md65p_a49ef859-b876-474a-9cd2-4bab9f43799a/dnsmasq-dns/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.467356 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mfbzx_cb95c9b2-ec91-415c-851c-1d10cd61f0f4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.597970 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8f518e2b-0a37-49eb-83f3-a393139e84c9/glance-httpd/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.662511 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8f518e2b-0a37-49eb-83f3-a393139e84c9/glance-log/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.798335 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0afe62d1-9751-4c32-820b-770b71e5599f/glance-httpd/0.log" Oct 06 13:28:49 crc kubenswrapper[4698]: I1006 13:28:49.848260 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0afe62d1-9751-4c32-820b-770b71e5599f/glance-log/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.053750 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-849d766464-jl8th_2b4da0ff-f7c0-47d2-b204-69c0da4ab453/horizon/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.211906 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dl55v_8624f3b8-45df-4efd-b49f-33838276c948/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.294671 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8lz9t_fdcfa9c6-8380-471e-a9bb-1368772713a5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.807743 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-849d766464-jl8th_2b4da0ff-f7c0-47d2-b204-69c0da4ab453/horizon-log/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.808268 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29329261-9wv7t_4a476195-2a9a-4be4-8199-16903da18935/keystone-cron/0.log" Oct 06 13:28:50 crc kubenswrapper[4698]: I1006 13:28:50.978048 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c2b3ac80-8153-430c-893a-21c4cc2f2a5d/kube-state-metrics/0.log" Oct 06 13:28:51 crc kubenswrapper[4698]: I1006 13:28:51.093773 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-658b97bb55-lp7jm_59515f7e-0c54-4044-8b9a-45f3aebb9870/keystone-api/0.log" Oct 06 13:28:51 crc kubenswrapper[4698]: I1006 13:28:51.095173 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fzwr9_7a102252-962d-4cb3-970b-acd2557e633e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:51 crc kubenswrapper[4698]: I1006 13:28:51.678336 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c87999589-tj5hk_6d4d2004-223b-4b0e-9b88-229437567c01/neutron-api/0.log" Oct 06 13:28:51 crc kubenswrapper[4698]: I1006 13:28:51.689793 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c87999589-tj5hk_6d4d2004-223b-4b0e-9b88-229437567c01/neutron-httpd/0.log" Oct 06 13:28:51 crc kubenswrapper[4698]: I1006 13:28:51.743216 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tm6h_c824e0ef-121e-428f-bf96-f9e1c87e57c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:52 crc kubenswrapper[4698]: I1006 13:28:52.740861 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_300ccf8e-2aa0-41c6-be99-b55c56ac8c73/nova-cell0-conductor-conductor/0.log" Oct 06 13:28:53 crc kubenswrapper[4698]: I1006 13:28:53.212969 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_68fa4814-8052-4643-996f-ec7f189756e2/nova-api-log/0.log" Oct 06 13:28:53 crc kubenswrapper[4698]: I1006 13:28:53.441489 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_fdb56a27-9290-42b9-9936-6de34abca79c/nova-cell1-conductor-conductor/0.log" Oct 06 13:28:53 crc kubenswrapper[4698]: I1006 13:28:53.815374 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f0828613-cf15-40b5-9af1-c13b856373bd/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 13:28:53 crc kubenswrapper[4698]: I1006 13:28:53.869830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_68fa4814-8052-4643-996f-ec7f189756e2/nova-api-api/0.log" Oct 06 13:28:53 crc kubenswrapper[4698]: I1006 13:28:53.949825 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-786mf_9853ba7c-85b2-4a97-ac8c-80be3f979248/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:54 crc kubenswrapper[4698]: I1006 13:28:54.129381 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7c3a05a9-7f25-4408-91b3-0ffa68c55545/nova-metadata-log/0.log" Oct 06 13:28:54 crc kubenswrapper[4698]: I1006 13:28:54.660986 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/mysql-bootstrap/0.log" Oct 06 13:28:54 crc kubenswrapper[4698]: I1006 13:28:54.680584 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d1cd5e9b-2297-4e73-91d5-a1cd00ff8263/nova-scheduler-scheduler/0.log" Oct 06 13:28:54 crc kubenswrapper[4698]: I1006 
13:28:54.841738 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/mysql-bootstrap/0.log" Oct 06 13:28:54 crc kubenswrapper[4698]: I1006 13:28:54.942698 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b6df0e48-e5a1-42b9-a3f9-712a00716e38/galera/0.log" Oct 06 13:28:55 crc kubenswrapper[4698]: I1006 13:28:55.213692 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/mysql-bootstrap/0.log" Oct 06 13:28:55 crc kubenswrapper[4698]: I1006 13:28:55.429010 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/mysql-bootstrap/0.log" Oct 06 13:28:55 crc kubenswrapper[4698]: I1006 13:28:55.437388 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fa86326e-abe0-482b-94db-4579c8dfbc66/galera/0.log" Oct 06 13:28:55 crc kubenswrapper[4698]: I1006 13:28:55.643972 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_832ec6ae-a05c-4838-93d2-8957d3dcdc6a/openstackclient/0.log" Oct 06 13:28:55 crc kubenswrapper[4698]: I1006 13:28:55.857347 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4ktjh_f7d17b7b-03e7-4379-9c64-57d50be1882c/openstack-network-exporter/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.093573 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server-init/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.283312 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server-init/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.391386 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovs-vswitchd/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.463361 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gx9kq_802f85d7-83b9-4361-ae5e-72d826586a43/ovsdb-server/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.658359 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7c3a05a9-7f25-4408-91b3-0ffa68c55545/nova-metadata-metadata/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.670918 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qmjmg_7dd3b0e2-4d06-4c91-8539-4db08c7f2d23/ovn-controller/0.log" Oct 06 13:28:56 crc kubenswrapper[4698]: I1006 13:28:56.914427 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gklpg_4112723d-ae85-4f84-867e-9219f74672ff/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 13:28:57.063630 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_711c60fb-212e-45d1-87c3-c15a97c60f90/ovn-northd/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 13:28:57.072635 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_711c60fb-212e-45d1-87c3-c15a97c60f90/openstack-network-exporter/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 13:28:57.284000 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_513c9b58-394d-48dd-a0c9-7ea2f4643f25/openstack-network-exporter/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 13:28:57.304221 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_513c9b58-394d-48dd-a0c9-7ea2f4643f25/ovsdbserver-nb/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 
13:28:57.477480 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3024f021-f705-443b-a7e1-bcb574c25fe7/ovsdbserver-sb/0.log" Oct 06 13:28:57 crc kubenswrapper[4698]: I1006 13:28:57.497808 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3024f021-f705-443b-a7e1-bcb574c25fe7/openstack-network-exporter/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.007109 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667d6544d-8ddpx_306a4319-6233-4455-85ac-b0c422603faf/placement-api/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.059472 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667d6544d-8ddpx_306a4319-6233-4455-85ac-b0c422603faf/placement-log/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.119517 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/init-config-reloader/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.306152 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/prometheus/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.317696 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/init-config-reloader/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.331138 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/config-reloader/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.548599 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/setup-container/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: 
I1006 13:28:58.559378 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34fdded6-e8a1-4564-bd6a-9ed17c9e57b5/thanos-sidecar/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.755187 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/setup-container/0.log" Oct 06 13:28:58 crc kubenswrapper[4698]: I1006 13:28:58.831738 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0c4e83e2-715d-4418-a8b2-c4fe36f46192/rabbitmq/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.005090 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/setup-container/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.240256 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/setup-container/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.304701 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_770a4197-e506-41c8-921b-31db7abd83fe/rabbitmq/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.432195 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xjc7x_702cd121-45e6-44b8-bdc6-c97634e3307f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.681898 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-975wl_1aa6350f-22ad-49c6-b717-6b5db37d7b27/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.847618 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqpqf_4ba434c8-0f2c-42ae-aa0c-21bf3186cfb9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.915450 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gr96r_eab59609-328f-41d0-94e9-0f6bcd78eaa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:28:59 crc kubenswrapper[4698]: I1006 13:28:59.956854 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_fa023504-b5d3-415a-a98c-8771aac74c06/memcached/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.108404 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-z9v58_2e1a78cf-8260-4c6c-88ed-fa72b63e10a9/ssh-known-hosts-edpm-deployment/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.362436 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd5b9f8ff-k9cfq_6900b347-8ed3-4474-b6b1-623471b2a03f/proxy-httpd/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.369383 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lhjtp_44a2d222-9a03-4483-a9dd-2708e7b3a5c7/swift-ring-rebalance/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.383812 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd5b9f8ff-k9cfq_6900b347-8ed3-4474-b6b1-623471b2a03f/proxy-server/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.539163 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-reaper/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.554996 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-replicator/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.591897 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-auditor/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.694028 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/account-server/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.702083 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-auditor/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.761557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-server/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.790718 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-replicator/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.861914 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/container-updater/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.927845 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-auditor/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.969713 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-expirer/0.log" Oct 06 13:29:00 crc kubenswrapper[4698]: I1006 13:29:00.972334 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-replicator/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.046868 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-server/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.086402 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/object-updater/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.117150 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/rsync/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.159803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_240ac959-0487-47d4-b219-7741b2127f50/swift-recon-cron/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.259177 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-d5d62_ff7ed42f-2288-48ac-9f89-9305e2f4a151/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.367371 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_06bf5456-72f4-4eee-a851-c943572e317b/tempest-tests-tempest-tests-runner/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.480259 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e9b78c09-1a17-43bd-8c65-27d435435cf8/test-operator-logs-container/0.log" Oct 06 13:29:01 crc kubenswrapper[4698]: I1006 13:29:01.564305 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q67vc_ecc55e3d-ca7e-41de-9f19-fb1b2857d398/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 13:29:02 crc kubenswrapper[4698]: I1006 13:29:02.312137 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_eac2f2ee-e5e6-4fb9-a527-47976859efe7/watcher-applier/0.log" Oct 06 13:29:02 crc kubenswrapper[4698]: I1006 13:29:02.620115 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_4c9514ee-65e0-4349-af35-8b7a65cf6bb9/watcher-api-log/0.log" Oct 06 13:29:03 crc kubenswrapper[4698]: I1006 13:29:03.351615 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_d8e278fa-3dfa-47dd-82b0-7296cc9ef08d/watcher-decision-engine/0.log" Oct 06 13:29:05 crc kubenswrapper[4698]: I1006 13:29:05.318399 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_4c9514ee-65e0-4349-af35-8b7a65cf6bb9/watcher-api/0.log" Oct 06 13:29:29 crc kubenswrapper[4698]: I1006 13:29:29.883498 4698 generic.go:334] "Generic (PLEG): container finished" podID="37ae86d9-05fc-4cae-b143-b0204cb21893" containerID="5f7ef413f32e5d2a1fa996323814e68da2bf3c2673bc755eda82a52aad1a54f9" exitCode=0 Oct 06 13:29:29 crc kubenswrapper[4698]: I1006 13:29:29.883551 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-68mml" event={"ID":"37ae86d9-05fc-4cae-b143-b0204cb21893","Type":"ContainerDied","Data":"5f7ef413f32e5d2a1fa996323814e68da2bf3c2673bc755eda82a52aad1a54f9"} Oct 06 13:29:30 crc kubenswrapper[4698]: I1006 13:29:30.997162 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.027861 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-68mml"] Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.035985 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-68mml"] Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.139997 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x4qn\" (UniqueName: \"kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn\") pod \"37ae86d9-05fc-4cae-b143-b0204cb21893\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.140315 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host\") pod \"37ae86d9-05fc-4cae-b143-b0204cb21893\" (UID: \"37ae86d9-05fc-4cae-b143-b0204cb21893\") " Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.140450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host" (OuterVolumeSpecName: "host") pod "37ae86d9-05fc-4cae-b143-b0204cb21893" (UID: "37ae86d9-05fc-4cae-b143-b0204cb21893"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.140692 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37ae86d9-05fc-4cae-b143-b0204cb21893-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.146419 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn" (OuterVolumeSpecName: "kube-api-access-4x4qn") pod "37ae86d9-05fc-4cae-b143-b0204cb21893" (UID: "37ae86d9-05fc-4cae-b143-b0204cb21893"). InnerVolumeSpecName "kube-api-access-4x4qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.242275 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x4qn\" (UniqueName: \"kubernetes.io/projected/37ae86d9-05fc-4cae-b143-b0204cb21893-kube-api-access-4x4qn\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.343207 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ae86d9-05fc-4cae-b143-b0204cb21893" path="/var/lib/kubelet/pods/37ae86d9-05fc-4cae-b143-b0204cb21893/volumes" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.910322 4698 scope.go:117] "RemoveContainer" containerID="5f7ef413f32e5d2a1fa996323814e68da2bf3c2673bc755eda82a52aad1a54f9" Oct 06 13:29:31 crc kubenswrapper[4698]: I1006 13:29:31.910345 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-68mml" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.225118 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-jxqmn"] Oct 06 13:29:32 crc kubenswrapper[4698]: E1006 13:29:32.225819 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ae86d9-05fc-4cae-b143-b0204cb21893" containerName="container-00" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.225835 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ae86d9-05fc-4cae-b143-b0204cb21893" containerName="container-00" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.226082 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ae86d9-05fc-4cae-b143-b0204cb21893" containerName="container-00" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.227133 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.364152 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.364924 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzqk\" (UniqueName: \"kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.468309 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.468476 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzqk\" (UniqueName: \"kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.468539 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.489683 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzqk\" (UniqueName: \"kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk\") pod \"crc-debug-jxqmn\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.545508 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.920461 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" event={"ID":"7731fb57-eedb-4173-b481-ce6a39c70c0c","Type":"ContainerStarted","Data":"e20b3c16e768e7bd908718dd777c2376586b2882c190c4409b14ecf39e9e3d92"} Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.920832 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" event={"ID":"7731fb57-eedb-4173-b481-ce6a39c70c0c","Type":"ContainerStarted","Data":"9682f84901787566ef08c2381eef5d0f79d6db26925de83d5515f6f8c3e7d739"} Oct 06 13:29:32 crc kubenswrapper[4698]: I1006 13:29:32.937421 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" podStartSLOduration=0.93740233 podStartE2EDuration="937.40233ms" podCreationTimestamp="2025-10-06 13:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:29:32.934075509 +0000 UTC m=+6260.346767682" watchObservedRunningTime="2025-10-06 13:29:32.93740233 +0000 UTC m=+6260.350094503" Oct 06 13:29:34 crc kubenswrapper[4698]: I1006 13:29:34.937571 4698 generic.go:334] "Generic (PLEG): container finished" podID="7731fb57-eedb-4173-b481-ce6a39c70c0c" containerID="e20b3c16e768e7bd908718dd777c2376586b2882c190c4409b14ecf39e9e3d92" exitCode=0 Oct 06 13:29:34 crc kubenswrapper[4698]: I1006 13:29:34.937665 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" event={"ID":"7731fb57-eedb-4173-b481-ce6a39c70c0c","Type":"ContainerDied","Data":"e20b3c16e768e7bd908718dd777c2376586b2882c190c4409b14ecf39e9e3d92"} Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.042072 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.131216 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host\") pod \"7731fb57-eedb-4173-b481-ce6a39c70c0c\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.131281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host" (OuterVolumeSpecName: "host") pod "7731fb57-eedb-4173-b481-ce6a39c70c0c" (UID: "7731fb57-eedb-4173-b481-ce6a39c70c0c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.131558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzqk\" (UniqueName: \"kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk\") pod \"7731fb57-eedb-4173-b481-ce6a39c70c0c\" (UID: \"7731fb57-eedb-4173-b481-ce6a39c70c0c\") " Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.132103 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7731fb57-eedb-4173-b481-ce6a39c70c0c-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.143758 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk" (OuterVolumeSpecName: "kube-api-access-5qzqk") pod "7731fb57-eedb-4173-b481-ce6a39c70c0c" (UID: "7731fb57-eedb-4173-b481-ce6a39c70c0c"). InnerVolumeSpecName "kube-api-access-5qzqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.232752 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzqk\" (UniqueName: \"kubernetes.io/projected/7731fb57-eedb-4173-b481-ce6a39c70c0c-kube-api-access-5qzqk\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.959975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" event={"ID":"7731fb57-eedb-4173-b481-ce6a39c70c0c","Type":"ContainerDied","Data":"9682f84901787566ef08c2381eef5d0f79d6db26925de83d5515f6f8c3e7d739"} Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.960363 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9682f84901787566ef08c2381eef5d0f79d6db26925de83d5515f6f8c3e7d739" Oct 06 13:29:36 crc kubenswrapper[4698]: I1006 13:29:36.960080 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-jxqmn" Oct 06 13:29:42 crc kubenswrapper[4698]: I1006 13:29:42.529091 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-jxqmn"] Oct 06 13:29:42 crc kubenswrapper[4698]: I1006 13:29:42.537307 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-jxqmn"] Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.342330 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7731fb57-eedb-4173-b481-ce6a39c70c0c" path="/var/lib/kubelet/pods/7731fb57-eedb-4173-b481-ce6a39c70c0c/volumes" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.764214 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-5dsxh"] Oct 06 13:29:43 crc kubenswrapper[4698]: E1006 13:29:43.765536 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7731fb57-eedb-4173-b481-ce6a39c70c0c" 
containerName="container-00" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.765659 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7731fb57-eedb-4173-b481-ce6a39c70c0c" containerName="container-00" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.766069 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7731fb57-eedb-4173-b481-ce6a39c70c0c" containerName="container-00" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.766907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.949609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host\") pod \"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:43 crc kubenswrapper[4698]: I1006 13:29:43.949794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8p8\" (UniqueName: \"kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8\") pod \"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:44 crc kubenswrapper[4698]: I1006 13:29:44.052202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host\") pod \"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:44 crc kubenswrapper[4698]: I1006 13:29:44.052350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host\") pod 
\"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:44 crc kubenswrapper[4698]: I1006 13:29:44.052382 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn8p8\" (UniqueName: \"kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8\") pod \"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:44 crc kubenswrapper[4698]: I1006 13:29:44.089863 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn8p8\" (UniqueName: \"kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8\") pod \"crc-debug-5dsxh\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:44 crc kubenswrapper[4698]: I1006 13:29:44.100591 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:45 crc kubenswrapper[4698]: I1006 13:29:45.042984 4698 generic.go:334] "Generic (PLEG): container finished" podID="05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" containerID="ba34c437b638817b6632695eb9374f85444f1baed86639fbfdae76cb139d6709" exitCode=0 Oct 06 13:29:45 crc kubenswrapper[4698]: I1006 13:29:45.043077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" event={"ID":"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0","Type":"ContainerDied","Data":"ba34c437b638817b6632695eb9374f85444f1baed86639fbfdae76cb139d6709"} Oct 06 13:29:45 crc kubenswrapper[4698]: I1006 13:29:45.043403 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" event={"ID":"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0","Type":"ContainerStarted","Data":"21a146df28bb9b2f86609156e9b01c8012d27943c8c49b07bd81ff1f3d257de5"} Oct 06 13:29:45 crc kubenswrapper[4698]: I1006 13:29:45.099245 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-5dsxh"] Oct 06 13:29:45 crc kubenswrapper[4698]: I1006 13:29:45.118714 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wfkwq/crc-debug-5dsxh"] Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.163740 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.199227 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn8p8\" (UniqueName: \"kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8\") pod \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.199402 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host\") pod \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\" (UID: \"05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0\") " Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.199518 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host" (OuterVolumeSpecName: "host") pod "05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" (UID: "05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.200127 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-host\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.207578 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8" (OuterVolumeSpecName: "kube-api-access-jn8p8") pod "05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" (UID: "05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0"). InnerVolumeSpecName "kube-api-access-jn8p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.302221 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn8p8\" (UniqueName: \"kubernetes.io/projected/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0-kube-api-access-jn8p8\") on node \"crc\" DevicePath \"\"" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.785383 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.946048 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:29:46 crc kubenswrapper[4698]: I1006 13:29:46.974858 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.017688 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.062318 4698 scope.go:117] "RemoveContainer" containerID="ba34c437b638817b6632695eb9374f85444f1baed86639fbfdae76cb139d6709" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.062362 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/crc-debug-5dsxh" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.160821 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/pull/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.200684 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/util/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.205432 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_975565365de373f4bd4c4f64eac3037d392963c94b687d5778a64d42d3kdngx_7c8020aa-27fb-4446-b7b0-63a79eae552a/extract/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.307403 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-fgmjd_d2432ca3-e684-4c81-95c8-1e57826d09d6/kube-rbac-proxy/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.340888 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" path="/var/lib/kubelet/pods/05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0/volumes" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.421970 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-fgmjd_d2432ca3-e684-4c81-95c8-1e57826d09d6/manager/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.479215 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-wvf75_0b715645-3bcb-4443-892b-e30062c78a7f/kube-rbac-proxy/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.543936 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-wvf75_0b715645-3bcb-4443-892b-e30062c78a7f/manager/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.627251 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-jncqt_b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1/kube-rbac-proxy/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.658274 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-jncqt_b6a6e10a-c7c5-45a6-96fe-4fb3e60ffde1/manager/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.831267 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-tnv74_e2d5b718-b49a-46c0-9f1d-1e536ff62301/kube-rbac-proxy/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.890236 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-tnv74_e2d5b718-b49a-46c0-9f1d-1e536ff62301/manager/0.log" Oct 06 13:29:47 crc kubenswrapper[4698]: I1006 13:29:47.978826 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-689sr_110b7f13-850f-41a3-aadb-df0f5559ba1d/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.003461 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-689sr_110b7f13-850f-41a3-aadb-df0f5559ba1d/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.065689 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gprg9_92e02173-4289-4b84-b3b2-01b78d0a7205/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.177584 
4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gprg9_92e02173-4289-4b84-b3b2-01b78d0a7205/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.227312 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-p5rgb_c03f5f3c-6e6c-4eba-9a1f-695c23c0d995/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.412785 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-4zdhn_d6f6350d-b33d-4ac5-b364-c80145b4b742/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.470339 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-p5rgb_c03f5f3c-6e6c-4eba-9a1f-695c23c0d995/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.473546 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-4zdhn_d6f6350d-b33d-4ac5-b364-c80145b4b742/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.646610 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-25w45_b632a477-335c-4b0e-a83e-3812409b8afa/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.724912 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-25w45_b632a477-335c-4b0e-a83e-3812409b8afa/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.760985 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-swz27_9543eb0d-82ab-4599-b094-8789588846af/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc 
kubenswrapper[4698]: I1006 13:29:48.830201 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-swz27_9543eb0d-82ab-4599-b094-8789588846af/manager/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.914557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-hg86j_9d910961-2283-4129-a2e0-6cec10da5779/kube-rbac-proxy/0.log" Oct 06 13:29:48 crc kubenswrapper[4698]: I1006 13:29:48.978826 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-hg86j_9d910961-2283-4129-a2e0-6cec10da5779/manager/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.010824 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-w5cv6_fa0c0f93-841b-4e62-becb-32dcf40ae439/kube-rbac-proxy/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.115454 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-w5cv6_fa0c0f93-841b-4e62-becb-32dcf40ae439/manager/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.207189 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rgxcq_38d45acb-51da-4535-a6a8-a317360f96fd/kube-rbac-proxy/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.319291 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rgxcq_38d45acb-51da-4535-a6a8-a317360f96fd/manager/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.435703 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-j9bnp_56e863a6-f963-4d2f-9de6-7805ff14e90a/kube-rbac-proxy/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.456644 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-j9bnp_56e863a6-f963-4d2f-9de6-7805ff14e90a/manager/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.559397 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp_744f45cb-8563-4bf2-90f1-59f2caa1e4f4/kube-rbac-proxy/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.685613 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cwjvnp_744f45cb-8563-4bf2-90f1-59f2caa1e4f4/manager/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.749494 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-767bcbdf69-tr7dh_7c1cbc98-12c2-409b-b673-0f3df8edd0fc/kube-rbac-proxy/0.log" Oct 06 13:29:49 crc kubenswrapper[4698]: I1006 13:29:49.918869 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5dfb49f657-vf592_5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae/kube-rbac-proxy/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.122071 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-spxxt_9b16e745-42a3-4aaf-ad06-dab67bab9ce7/registry-server/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.263940 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-c8sxb_437c5088-93d6-4331-8671-e4e537e553a7/kube-rbac-proxy/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: 
I1006 13:29:50.271703 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5dfb49f657-vf592_5b03b2e1-7b4b-4eca-98ab-84dfeb5c48ae/operator/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.377411 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-c8sxb_437c5088-93d6-4331-8671-e4e537e553a7/manager/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.487214 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-vwcqq_6e97841f-b15e-4834-a445-d2a632d7021a/kube-rbac-proxy/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.599942 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-85kfz_572054de-889b-43ac-abb2-8bca55810d18/operator/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.605551 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-vwcqq_6e97841f-b15e-4834-a445-d2a632d7021a/manager/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.792716 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-496mk_782cf4ae-9b34-46e9-9bfc-c7da6118c2dc/kube-rbac-proxy/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.823838 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-496mk_782cf4ae-9b34-46e9-9bfc-c7da6118c2dc/manager/0.log" Oct 06 13:29:50 crc kubenswrapper[4698]: I1006 13:29:50.935088 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-767bcbdf69-tr7dh_7c1cbc98-12c2-409b-b673-0f3df8edd0fc/manager/0.log" Oct 06 
13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.023527 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-r6ntv_6cd7d60a-943c-42e8-9b96-74e76f1338f6/kube-rbac-proxy/0.log" Oct 06 13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.046183 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pnks4_6fdb9f18-6759-435a-bae6-90271f8da5b0/kube-rbac-proxy/0.log" Oct 06 13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.171471 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pnks4_6fdb9f18-6759-435a-bae6-90271f8da5b0/manager/0.log" Oct 06 13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.198084 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-r6ntv_6cd7d60a-943c-42e8-9b96-74e76f1338f6/manager/0.log" Oct 06 13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.267090 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-99f6c4584-gxz2f_33858802-bf6b-42d2-bdc6-8ec2202dd1fe/kube-rbac-proxy/0.log" Oct 06 13:29:51 crc kubenswrapper[4698]: I1006 13:29:51.421203 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-99f6c4584-gxz2f_33858802-bf6b-42d2-bdc6-8ec2202dd1fe/manager/0.log" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.182070 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:29:52 crc kubenswrapper[4698]: E1006 13:29:52.182608 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" containerName="container-00" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.182624 4698 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" containerName="container-00" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.182952 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="05dce53d-ba0c-4bf6-a96f-6bbcb5890fb0" containerName="container-00" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.184832 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.190282 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.208688 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.208972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kbl\" (UniqueName: \"kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.209005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.310457 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.310517 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kbl\" (UniqueName: \"kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.310546 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.311152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.311152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.328187 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r5kbl\" (UniqueName: \"kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl\") pod \"community-operators-mmnj7\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:52 crc kubenswrapper[4698]: I1006 13:29:52.511001 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:29:53 crc kubenswrapper[4698]: I1006 13:29:53.044705 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:29:53 crc kubenswrapper[4698]: I1006 13:29:53.114958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerStarted","Data":"53ef7e52727622505884fe4b4416dc377609223583c04f949a57d702e3e761cc"} Oct 06 13:29:54 crc kubenswrapper[4698]: I1006 13:29:54.134174 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e8bd4cd-be10-456f-b471-151e66329487" containerID="af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068" exitCode=0 Oct 06 13:29:54 crc kubenswrapper[4698]: I1006 13:29:54.134463 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerDied","Data":"af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068"} Oct 06 13:29:54 crc kubenswrapper[4698]: I1006 13:29:54.136483 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 13:29:55 crc kubenswrapper[4698]: I1006 13:29:55.234716 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:29:55 crc kubenswrapper[4698]: I1006 13:29:55.235076 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:29:56 crc kubenswrapper[4698]: I1006 13:29:56.152626 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e8bd4cd-be10-456f-b471-151e66329487" containerID="fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6" exitCode=0 Oct 06 13:29:56 crc kubenswrapper[4698]: I1006 13:29:56.152735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerDied","Data":"fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6"} Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.194070 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp"] Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.197698 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.200343 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.205301 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.206321 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp"] Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.288949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.289040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.289254 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6tv\" (UniqueName: \"kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.391940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.392029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.392128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6tv\" (UniqueName: \"kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.394213 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.398965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.415412 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6tv\" (UniqueName: \"kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv\") pod \"collect-profiles-29329290-2lcqp\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:00 crc kubenswrapper[4698]: I1006 13:30:00.524177 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:01 crc kubenswrapper[4698]: I1006 13:30:01.041398 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp"] Oct 06 13:30:01 crc kubenswrapper[4698]: W1006 13:30:01.049847 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697d6394_f9a0_490b_bca7_5c02c7344cf8.slice/crio-52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f WatchSource:0}: Error finding container 52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f: Status 404 returned error can't find the container with id 52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f Oct 06 13:30:01 crc kubenswrapper[4698]: I1006 13:30:01.203475 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" event={"ID":"697d6394-f9a0-490b-bca7-5c02c7344cf8","Type":"ContainerStarted","Data":"52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f"} Oct 06 13:30:02 crc 
kubenswrapper[4698]: I1006 13:30:02.217657 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" event={"ID":"697d6394-f9a0-490b-bca7-5c02c7344cf8","Type":"ContainerStarted","Data":"1854b9b8bd1f96f643fa685de9ba80494d2cb594c0109585bc9d9a1548a8affc"} Oct 06 13:30:02 crc kubenswrapper[4698]: I1006 13:30:02.241884 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" podStartSLOduration=2.241860252 podStartE2EDuration="2.241860252s" podCreationTimestamp="2025-10-06 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 13:30:02.231363412 +0000 UTC m=+6289.644055595" watchObservedRunningTime="2025-10-06 13:30:02.241860252 +0000 UTC m=+6289.654552445" Oct 06 13:30:03 crc kubenswrapper[4698]: I1006 13:30:03.234454 4698 generic.go:334] "Generic (PLEG): container finished" podID="697d6394-f9a0-490b-bca7-5c02c7344cf8" containerID="1854b9b8bd1f96f643fa685de9ba80494d2cb594c0109585bc9d9a1548a8affc" exitCode=0 Oct 06 13:30:03 crc kubenswrapper[4698]: I1006 13:30:03.234817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" event={"ID":"697d6394-f9a0-490b-bca7-5c02c7344cf8","Type":"ContainerDied","Data":"1854b9b8bd1f96f643fa685de9ba80494d2cb594c0109585bc9d9a1548a8affc"} Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.248797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerStarted","Data":"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216"} Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.274676 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-mmnj7" podStartSLOduration=3.263542445 podStartE2EDuration="12.27465418s" podCreationTimestamp="2025-10-06 13:29:52 +0000 UTC" firstStartedPulling="2025-10-06 13:29:54.13620222 +0000 UTC m=+6281.548894393" lastFinishedPulling="2025-10-06 13:30:03.147313945 +0000 UTC m=+6290.560006128" observedRunningTime="2025-10-06 13:30:04.266341595 +0000 UTC m=+6291.679033788" watchObservedRunningTime="2025-10-06 13:30:04.27465418 +0000 UTC m=+6291.687346363" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.626897 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.784049 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume\") pod \"697d6394-f9a0-490b-bca7-5c02c7344cf8\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.784131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6tv\" (UniqueName: \"kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv\") pod \"697d6394-f9a0-490b-bca7-5c02c7344cf8\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.784195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume\") pod \"697d6394-f9a0-490b-bca7-5c02c7344cf8\" (UID: \"697d6394-f9a0-490b-bca7-5c02c7344cf8\") " Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.785072 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume" (OuterVolumeSpecName: "config-volume") pod "697d6394-f9a0-490b-bca7-5c02c7344cf8" (UID: "697d6394-f9a0-490b-bca7-5c02c7344cf8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.785868 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d6394-f9a0-490b-bca7-5c02c7344cf8-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.789756 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "697d6394-f9a0-490b-bca7-5c02c7344cf8" (UID: "697d6394-f9a0-490b-bca7-5c02c7344cf8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.796654 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv" (OuterVolumeSpecName: "kube-api-access-5j6tv") pod "697d6394-f9a0-490b-bca7-5c02c7344cf8" (UID: "697d6394-f9a0-490b-bca7-5c02c7344cf8"). InnerVolumeSpecName "kube-api-access-5j6tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.888045 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6tv\" (UniqueName: \"kubernetes.io/projected/697d6394-f9a0-490b-bca7-5c02c7344cf8-kube-api-access-5j6tv\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:04 crc kubenswrapper[4698]: I1006 13:30:04.888078 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/697d6394-f9a0-490b-bca7-5c02c7344cf8-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.261579 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.262465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329290-2lcqp" event={"ID":"697d6394-f9a0-490b-bca7-5c02c7344cf8","Type":"ContainerDied","Data":"52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f"} Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.262525 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52dc3970e5ef9dc40a4a79540111273b8b4e679555bad55768a6cad7bd0fe68f" Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.311891 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj"] Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.321070 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329245-kpqbj"] Oct 06 13:30:05 crc kubenswrapper[4698]: I1006 13:30:05.340755 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44fbd4c3-6c06-465d-aac4-1391de15548c" 
path="/var/lib/kubelet/pods/44fbd4c3-6c06-465d-aac4-1391de15548c/volumes" Oct 06 13:30:08 crc kubenswrapper[4698]: I1006 13:30:08.001828 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2j94r_c51a9b0f-7c30-4d46-8b1c-f248ce31b955/control-plane-machine-set-operator/0.log" Oct 06 13:30:08 crc kubenswrapper[4698]: I1006 13:30:08.148715 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6dhbx_eef5ed90-dd02-478f-8038-4970199b1cac/kube-rbac-proxy/0.log" Oct 06 13:30:08 crc kubenswrapper[4698]: I1006 13:30:08.175070 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6dhbx_eef5ed90-dd02-478f-8038-4970199b1cac/machine-api-operator/0.log" Oct 06 13:30:12 crc kubenswrapper[4698]: I1006 13:30:12.511889 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:12 crc kubenswrapper[4698]: I1006 13:30:12.512841 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:12 crc kubenswrapper[4698]: I1006 13:30:12.579742 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:13 crc kubenswrapper[4698]: I1006 13:30:13.419357 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:13 crc kubenswrapper[4698]: I1006 13:30:13.470384 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:30:15 crc kubenswrapper[4698]: I1006 13:30:15.373688 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mmnj7" 
podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="registry-server" containerID="cri-o://be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216" gracePeriod=2 Oct 06 13:30:15 crc kubenswrapper[4698]: I1006 13:30:15.893230 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.016122 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities\") pod \"3e8bd4cd-be10-456f-b471-151e66329487\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.016363 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5kbl\" (UniqueName: \"kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl\") pod \"3e8bd4cd-be10-456f-b471-151e66329487\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.016470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content\") pod \"3e8bd4cd-be10-456f-b471-151e66329487\" (UID: \"3e8bd4cd-be10-456f-b471-151e66329487\") " Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.017049 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities" (OuterVolumeSpecName: "utilities") pod "3e8bd4cd-be10-456f-b471-151e66329487" (UID: "3e8bd4cd-be10-456f-b471-151e66329487"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.017370 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.029340 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl" (OuterVolumeSpecName: "kube-api-access-r5kbl") pod "3e8bd4cd-be10-456f-b471-151e66329487" (UID: "3e8bd4cd-be10-456f-b471-151e66329487"). InnerVolumeSpecName "kube-api-access-r5kbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.075385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e8bd4cd-be10-456f-b471-151e66329487" (UID: "3e8bd4cd-be10-456f-b471-151e66329487"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.118938 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5kbl\" (UniqueName: \"kubernetes.io/projected/3e8bd4cd-be10-456f-b471-151e66329487-kube-api-access-r5kbl\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.118971 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8bd4cd-be10-456f-b471-151e66329487-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.384785 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e8bd4cd-be10-456f-b471-151e66329487" containerID="be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216" exitCode=0 Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.384847 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmnj7" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.384863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerDied","Data":"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216"} Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.385222 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmnj7" event={"ID":"3e8bd4cd-be10-456f-b471-151e66329487","Type":"ContainerDied","Data":"53ef7e52727622505884fe4b4416dc377609223583c04f949a57d702e3e761cc"} Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.385286 4698 scope.go:117] "RemoveContainer" containerID="be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.428946 4698 scope.go:117] "RemoveContainer" 
containerID="fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.439545 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.449994 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mmnj7"] Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.450405 4698 scope.go:117] "RemoveContainer" containerID="af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.503215 4698 scope.go:117] "RemoveContainer" containerID="be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216" Oct 06 13:30:16 crc kubenswrapper[4698]: E1006 13:30:16.503747 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216\": container with ID starting with be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216 not found: ID does not exist" containerID="be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.503808 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216"} err="failed to get container status \"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216\": rpc error: code = NotFound desc = could not find container \"be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216\": container with ID starting with be9f366b6050b36fa8196898ca0f2d1866786b71018686e98c0e053bebfa1216 not found: ID does not exist" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.503828 4698 scope.go:117] "RemoveContainer" 
containerID="fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6" Oct 06 13:30:16 crc kubenswrapper[4698]: E1006 13:30:16.504442 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6\": container with ID starting with fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6 not found: ID does not exist" containerID="fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.504464 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6"} err="failed to get container status \"fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6\": rpc error: code = NotFound desc = could not find container \"fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6\": container with ID starting with fad3c7d657256d3043bfce86e0347fe06411b66dc3565e3f9e18ecb458c7aad6 not found: ID does not exist" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.504493 4698 scope.go:117] "RemoveContainer" containerID="af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068" Oct 06 13:30:16 crc kubenswrapper[4698]: E1006 13:30:16.504868 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068\": container with ID starting with af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068 not found: ID does not exist" containerID="af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068" Oct 06 13:30:16 crc kubenswrapper[4698]: I1006 13:30:16.504883 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068"} err="failed to get container status \"af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068\": rpc error: code = NotFound desc = could not find container \"af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068\": container with ID starting with af18f609ab0a189a73790148750288b4a87cc2cb370e6e2ac98e2a3b4ec1d068 not found: ID does not exist" Oct 06 13:30:17 crc kubenswrapper[4698]: I1006 13:30:17.350934 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8bd4cd-be10-456f-b471-151e66329487" path="/var/lib/kubelet/pods/3e8bd4cd-be10-456f-b471-151e66329487/volumes" Oct 06 13:30:20 crc kubenswrapper[4698]: I1006 13:30:20.180778 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wpvgh_c57ea6be-96d1-4d4f-8c49-94ee240a5482/cert-manager-controller/0.log" Oct 06 13:30:20 crc kubenswrapper[4698]: I1006 13:30:20.364202 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-xncst_b73b818b-7d2e-4c3f-9622-77ee5c1fc72d/cert-manager-cainjector/0.log" Oct 06 13:30:20 crc kubenswrapper[4698]: I1006 13:30:20.461541 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-sbjmt_be67c15a-01a0-435f-995b-f61cd109d8c8/cert-manager-webhook/0.log" Oct 06 13:30:25 crc kubenswrapper[4698]: I1006 13:30:25.235556 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:30:25 crc kubenswrapper[4698]: I1006 13:30:25.236471 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" 
podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:30:32 crc kubenswrapper[4698]: I1006 13:30:32.696788 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-4b488_d1028a8b-c391-4df6-978c-83a168615335/nmstate-console-plugin/0.log" Oct 06 13:30:32 crc kubenswrapper[4698]: I1006 13:30:32.870639 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kgvmq_e62b6cf6-ece4-46f0-9aba-887633daf472/nmstate-handler/0.log" Oct 06 13:30:32 crc kubenswrapper[4698]: I1006 13:30:32.888657 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-hvpml_3fd88842-bd7f-4a22-9289-55f917571cbf/kube-rbac-proxy/0.log" Oct 06 13:30:32 crc kubenswrapper[4698]: I1006 13:30:32.938037 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-hvpml_3fd88842-bd7f-4a22-9289-55f917571cbf/nmstate-metrics/0.log" Oct 06 13:30:33 crc kubenswrapper[4698]: I1006 13:30:33.066861 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-dlzmx_143963ef-7761-472a-b173-7407f5b7befb/nmstate-operator/0.log" Oct 06 13:30:33 crc kubenswrapper[4698]: I1006 13:30:33.129829 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-w6q8p_90215016-8b6e-445d-a43a-d87dafd57bf2/nmstate-webhook/0.log" Oct 06 13:30:45 crc kubenswrapper[4698]: I1006 13:30:45.974202 4698 scope.go:117] "RemoveContainer" containerID="0b4b33c879ff6a9d0572a2b309e9b7e7173af40843f668633ae088caacd06ef2" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.380560 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-68d546b9d8-m6ns6_6fe4b880-4427-41e6-96d1-50cbc874aa6b/kube-rbac-proxy/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.477358 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-m6ns6_6fe4b880-4427-41e6-96d1-50cbc874aa6b/controller/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.546912 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.730107 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.731622 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.749974 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.781242 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.912714 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.931800 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.951322 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:30:47 crc kubenswrapper[4698]: I1006 13:30:47.975634 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.154842 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-frr-files/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.162965 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-metrics/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.163004 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/cp-reloader/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.190791 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/controller/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.342097 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/frr-metrics/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.345867 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/kube-rbac-proxy/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.429830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/kube-rbac-proxy-frr/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.573023 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/reloader/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.700639 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-vrr9z_90b467bf-2ac1-461a-83d5-db35ed92d625/frr-k8s-webhook-server/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.782738 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-84678b9ffd-5hzgg_9c5b34b7-49db-4807-8a96-dec961b07948/manager/0.log" Oct 06 13:30:48 crc kubenswrapper[4698]: I1006 13:30:48.982838 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64685f8694-khhpz_70df7ae7-15a5-42ad-8db5-728081b24cd9/webhook-server/0.log" Oct 06 13:30:49 crc kubenswrapper[4698]: I1006 13:30:49.207314 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hb6mj_30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5/kube-rbac-proxy/0.log" Oct 06 13:30:49 crc kubenswrapper[4698]: I1006 13:30:49.747437 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hb6mj_30e3aeb4-d5c7-46b7-871e-e6e54cb2bab5/speaker/0.log" Oct 06 13:30:50 crc kubenswrapper[4698]: I1006 13:30:50.118142 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhhg6_afd684b4-4275-4f0f-89d7-3e1624a04237/frr/0.log" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.235274 4698 patch_prober.go:28] interesting pod/machine-config-daemon-7mj8x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.236202 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.236290 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.237194 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71"} pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.237296 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" containerName="machine-config-daemon" containerID="cri-o://e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" gracePeriod=600 Oct 06 13:30:55 crc kubenswrapper[4698]: E1006 13:30:55.400539 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.824539 4698 generic.go:334] "Generic (PLEG): container finished" podID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" 
containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" exitCode=0 Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.824599 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" event={"ID":"490a89c4-aeb3-4c8f-bdfb-c36f7fc40209","Type":"ContainerDied","Data":"e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71"} Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.824644 4698 scope.go:117] "RemoveContainer" containerID="46f91cceb68a1980b09f2f322753209cb8d7a9cd9f5e890a7b0118ac4a87047e" Oct 06 13:30:55 crc kubenswrapper[4698]: I1006 13:30:55.825412 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:30:55 crc kubenswrapper[4698]: E1006 13:30:55.825964 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.485089 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.703906 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.725146 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.730045 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.916652 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/util/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.927748 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/pull/0.log" Oct 06 13:31:02 crc kubenswrapper[4698]: I1006 13:31:02.938942 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2w2tg5_0ece3450-b435-4ef3-ac92-2596540f52d7/extract/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.051382 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.287231 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.294509 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 
13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.300760 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.489137 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/util/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.497736 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/extract/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.509460 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2djdcjx_c3de473e-3386-4a45-bcf2-a98bab1b6c55/pull/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.703132 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.847195 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.855545 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:31:03 crc kubenswrapper[4698]: I1006 13:31:03.880546 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 
13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.016721 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-utilities/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.063921 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/extract-content/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.211336 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.464580 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.507109 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.519003 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.623693 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qww7g_d6cdd550-bddb-401e-af65-3bd665e4f5e7/registry-server/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.748906 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-content/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.754319 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/extract-utilities/0.log" Oct 06 13:31:04 crc kubenswrapper[4698]: I1006 13:31:04.966481 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.157499 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.187297 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.244081 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.464492 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/util/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.478891 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/pull/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.524800 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cx5gmv_f81cd182-baf0-4779-8a64-b90655bb2275/extract/0.log" Oct 06 13:31:05 crc 
kubenswrapper[4698]: I1006 13:31:05.677580 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sbkqv_debcb559-cc53-4d24-9eb0-233c76c3cab1/marketplace-operator/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.696322 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vf9r_50429998-15ac-4de9-b112-c6fb17e9dd18/registry-server/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.708804 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.933346 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.948977 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:31:05 crc kubenswrapper[4698]: I1006 13:31:05.967212 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.102238 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-utilities/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.136407 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/extract-content/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.153663 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.342631 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8vp82_189e0eb7-102f-4ba2-ab71-0f5cd231bd2b/registry-server/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.379146 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.409148 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.413353 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.572230 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-utilities/0.log" Oct 06 13:31:06 crc kubenswrapper[4698]: I1006 13:31:06.611725 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/extract-content/0.log" Oct 06 13:31:07 crc kubenswrapper[4698]: I1006 13:31:07.179362 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rrh74_e00ad13c-4719-46f8-883a-8bf6f03180ca/registry-server/0.log" Oct 06 13:31:09 crc kubenswrapper[4698]: I1006 13:31:09.329278 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:31:09 crc kubenswrapper[4698]: E1006 13:31:09.329842 
4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:31:19 crc kubenswrapper[4698]: I1006 13:31:19.113971 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-4vjwn_795598bd-9625-48d4-8b2b-9d5d5418391a/prometheus-operator/0.log" Oct 06 13:31:19 crc kubenswrapper[4698]: I1006 13:31:19.290261 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cf56dcb66-wm4vx_d4be74b3-b8b9-45af-b971-bd29e82d0879/prometheus-operator-admission-webhook/0.log" Oct 06 13:31:19 crc kubenswrapper[4698]: I1006 13:31:19.317994 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cf56dcb66-zkq9j_2986e2db-d42d-417a-b203-1eb36ae90468/prometheus-operator-admission-webhook/0.log" Oct 06 13:31:19 crc kubenswrapper[4698]: I1006 13:31:19.489257 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-9sp88_c711bfcd-11d2-4ad7-8059-9f1f406dd064/operator/0.log" Oct 06 13:31:19 crc kubenswrapper[4698]: I1006 13:31:19.542714 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-gl2vd_bc83ee37-67c1-4393-83c8-9ee46b2c1d30/perses-operator/0.log" Oct 06 13:31:22 crc kubenswrapper[4698]: I1006 13:31:22.329647 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:31:22 crc kubenswrapper[4698]: E1006 13:31:22.330448 4698 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:31:35 crc kubenswrapper[4698]: I1006 13:31:35.328825 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:31:35 crc kubenswrapper[4698]: E1006 13:31:35.329520 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:31:49 crc kubenswrapper[4698]: I1006 13:31:49.336400 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:31:49 crc kubenswrapper[4698]: E1006 13:31:49.337627 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:32:04 crc kubenswrapper[4698]: I1006 13:32:04.330227 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:32:04 crc kubenswrapper[4698]: E1006 
13:32:04.333736 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:32:17 crc kubenswrapper[4698]: I1006 13:32:17.335493 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:32:17 crc kubenswrapper[4698]: E1006 13:32:17.336842 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:32:32 crc kubenswrapper[4698]: I1006 13:32:32.328221 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:32:32 crc kubenswrapper[4698]: E1006 13:32:32.328894 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:32:45 crc kubenswrapper[4698]: I1006 13:32:45.329058 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:32:45 crc 
kubenswrapper[4698]: E1006 13:32:45.330298 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:32:56 crc kubenswrapper[4698]: I1006 13:32:56.329565 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:32:56 crc kubenswrapper[4698]: E1006 13:32:56.330653 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.352986 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:01 crc kubenswrapper[4698]: E1006 13:33:01.353975 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697d6394-f9a0-490b-bca7-5c02c7344cf8" containerName="collect-profiles" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.353991 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="697d6394-f9a0-490b-bca7-5c02c7344cf8" containerName="collect-profiles" Oct 06 13:33:01 crc kubenswrapper[4698]: E1006 13:33:01.354038 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="extract-utilities" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.354046 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="extract-utilities" Oct 06 13:33:01 crc kubenswrapper[4698]: E1006 13:33:01.354062 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="registry-server" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.354069 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="registry-server" Oct 06 13:33:01 crc kubenswrapper[4698]: E1006 13:33:01.354092 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="extract-content" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.354100 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="extract-content" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.354326 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="697d6394-f9a0-490b-bca7-5c02c7344cf8" containerName="collect-profiles" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.354354 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8bd4cd-be10-456f-b471-151e66329487" containerName="registry-server" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.356057 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.379261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.380629 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.380825 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j6v\" (UniqueName: \"kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.403985 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.488360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.488427 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f5j6v\" (UniqueName: \"kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.488843 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.488951 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.489311 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.508049 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j6v\" (UniqueName: \"kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v\") pod \"redhat-marketplace-tf9sn\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:01 crc kubenswrapper[4698]: I1006 13:33:01.693929 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:02 crc kubenswrapper[4698]: I1006 13:33:02.179953 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:02 crc kubenswrapper[4698]: W1006 13:33:02.192971 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b8509f_0a6d_4adb_a167_281d5bfc62ee.slice/crio-e19ab84e36d6e9feaf9460ed7edb6ec23b956cb784c01a310e3ad54fe8cdaee6 WatchSource:0}: Error finding container e19ab84e36d6e9feaf9460ed7edb6ec23b956cb784c01a310e3ad54fe8cdaee6: Status 404 returned error can't find the container with id e19ab84e36d6e9feaf9460ed7edb6ec23b956cb784c01a310e3ad54fe8cdaee6 Oct 06 13:33:02 crc kubenswrapper[4698]: I1006 13:33:02.252361 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerStarted","Data":"e19ab84e36d6e9feaf9460ed7edb6ec23b956cb784c01a310e3ad54fe8cdaee6"} Oct 06 13:33:03 crc kubenswrapper[4698]: I1006 13:33:03.262289 4698 generic.go:334] "Generic (PLEG): container finished" podID="66b8509f-0a6d-4adb-a167-281d5bfc62ee" containerID="a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1" exitCode=0 Oct 06 13:33:03 crc kubenswrapper[4698]: I1006 13:33:03.262325 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerDied","Data":"a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1"} Oct 06 13:33:04 crc kubenswrapper[4698]: I1006 13:33:04.279068 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" 
event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerStarted","Data":"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff"} Oct 06 13:33:05 crc kubenswrapper[4698]: I1006 13:33:05.301580 4698 generic.go:334] "Generic (PLEG): container finished" podID="66b8509f-0a6d-4adb-a167-281d5bfc62ee" containerID="ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff" exitCode=0 Oct 06 13:33:05 crc kubenswrapper[4698]: I1006 13:33:05.301967 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerDied","Data":"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff"} Oct 06 13:33:06 crc kubenswrapper[4698]: I1006 13:33:06.319616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerStarted","Data":"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915"} Oct 06 13:33:06 crc kubenswrapper[4698]: I1006 13:33:06.351460 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tf9sn" podStartSLOduration=2.870357961 podStartE2EDuration="5.351419885s" podCreationTimestamp="2025-10-06 13:33:01 +0000 UTC" firstStartedPulling="2025-10-06 13:33:03.263710278 +0000 UTC m=+6470.676402451" lastFinishedPulling="2025-10-06 13:33:05.744772162 +0000 UTC m=+6473.157464375" observedRunningTime="2025-10-06 13:33:06.347358876 +0000 UTC m=+6473.760051049" watchObservedRunningTime="2025-10-06 13:33:06.351419885 +0000 UTC m=+6473.764112078" Oct 06 13:33:09 crc kubenswrapper[4698]: I1006 13:33:09.329568 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:33:09 crc kubenswrapper[4698]: E1006 13:33:09.330684 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:33:11 crc kubenswrapper[4698]: I1006 13:33:11.695382 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:11 crc kubenswrapper[4698]: I1006 13:33:11.695753 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:11 crc kubenswrapper[4698]: I1006 13:33:11.741371 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:12 crc kubenswrapper[4698]: I1006 13:33:12.489415 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:12 crc kubenswrapper[4698]: I1006 13:33:12.596378 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:14 crc kubenswrapper[4698]: I1006 13:33:14.430256 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tf9sn" podUID="66b8509f-0a6d-4adb-a167-281d5bfc62ee" containerName="registry-server" containerID="cri-o://ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915" gracePeriod=2 Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.041211 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.065118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities\") pod \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.065203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content\") pod \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.065251 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j6v\" (UniqueName: \"kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v\") pod \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\" (UID: \"66b8509f-0a6d-4adb-a167-281d5bfc62ee\") " Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.066281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities" (OuterVolumeSpecName: "utilities") pod "66b8509f-0a6d-4adb-a167-281d5bfc62ee" (UID: "66b8509f-0a6d-4adb-a167-281d5bfc62ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.071816 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v" (OuterVolumeSpecName: "kube-api-access-f5j6v") pod "66b8509f-0a6d-4adb-a167-281d5bfc62ee" (UID: "66b8509f-0a6d-4adb-a167-281d5bfc62ee"). InnerVolumeSpecName "kube-api-access-f5j6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.082404 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66b8509f-0a6d-4adb-a167-281d5bfc62ee" (UID: "66b8509f-0a6d-4adb-a167-281d5bfc62ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.167922 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.167953 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b8509f-0a6d-4adb-a167-281d5bfc62ee-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.167967 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j6v\" (UniqueName: \"kubernetes.io/projected/66b8509f-0a6d-4adb-a167-281d5bfc62ee-kube-api-access-f5j6v\") on node \"crc\" DevicePath \"\"" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.443928 4698 generic.go:334] "Generic (PLEG): container finished" podID="66b8509f-0a6d-4adb-a167-281d5bfc62ee" containerID="ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915" exitCode=0 Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.443982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerDied","Data":"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915"} Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.444042 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-tf9sn" event={"ID":"66b8509f-0a6d-4adb-a167-281d5bfc62ee","Type":"ContainerDied","Data":"e19ab84e36d6e9feaf9460ed7edb6ec23b956cb784c01a310e3ad54fe8cdaee6"} Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.444062 4698 scope.go:117] "RemoveContainer" containerID="ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.444199 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf9sn" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.475887 4698 scope.go:117] "RemoveContainer" containerID="ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.485130 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.494882 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf9sn"] Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.502131 4698 scope.go:117] "RemoveContainer" containerID="a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.570557 4698 scope.go:117] "RemoveContainer" containerID="ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915" Oct 06 13:33:15 crc kubenswrapper[4698]: E1006 13:33:15.571004 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915\": container with ID starting with ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915 not found: ID does not exist" containerID="ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.571161 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915"} err="failed to get container status \"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915\": rpc error: code = NotFound desc = could not find container \"ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915\": container with ID starting with ffa0f48f367a06f3fdaea7f99fdbddd38f785ec6fa5ba23faaf3ee725d38f915 not found: ID does not exist" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.571187 4698 scope.go:117] "RemoveContainer" containerID="ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff" Oct 06 13:33:15 crc kubenswrapper[4698]: E1006 13:33:15.571565 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff\": container with ID starting with ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff not found: ID does not exist" containerID="ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.571593 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff"} err="failed to get container status \"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff\": rpc error: code = NotFound desc = could not find container \"ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff\": container with ID starting with ce4b84109685600574b778cdad70866d13d7d8acc5c5067ea5e36722538fb1ff not found: ID does not exist" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.571607 4698 scope.go:117] "RemoveContainer" containerID="a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1" Oct 06 13:33:15 crc kubenswrapper[4698]: E1006 
13:33:15.571919 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1\": container with ID starting with a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1 not found: ID does not exist" containerID="a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1" Oct 06 13:33:15 crc kubenswrapper[4698]: I1006 13:33:15.571967 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1"} err="failed to get container status \"a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1\": rpc error: code = NotFound desc = could not find container \"a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1\": container with ID starting with a31ba91755116605395da0eb2c0dea2ea79720d4469b658ae787c7bbdf830cf1 not found: ID does not exist" Oct 06 13:33:17 crc kubenswrapper[4698]: I1006 13:33:17.345583 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b8509f-0a6d-4adb-a167-281d5bfc62ee" path="/var/lib/kubelet/pods/66b8509f-0a6d-4adb-a167-281d5bfc62ee/volumes" Oct 06 13:33:21 crc kubenswrapper[4698]: I1006 13:33:21.329461 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:33:21 crc kubenswrapper[4698]: E1006 13:33:21.330158 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:33:30 crc kubenswrapper[4698]: I1006 13:33:30.632349 
4698 generic.go:334] "Generic (PLEG): container finished" podID="ceccbe49-690a-417f-9270-ae954f09dc6d" containerID="2abef078383b079873e3b2ce34b108d1705c65053bd9b553684856d8dacef50e" exitCode=0 Oct 06 13:33:30 crc kubenswrapper[4698]: I1006 13:33:30.632443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wfkwq/must-gather-chg92" event={"ID":"ceccbe49-690a-417f-9270-ae954f09dc6d","Type":"ContainerDied","Data":"2abef078383b079873e3b2ce34b108d1705c65053bd9b553684856d8dacef50e"} Oct 06 13:33:30 crc kubenswrapper[4698]: I1006 13:33:30.634718 4698 scope.go:117] "RemoveContainer" containerID="2abef078383b079873e3b2ce34b108d1705c65053bd9b553684856d8dacef50e" Oct 06 13:33:30 crc kubenswrapper[4698]: I1006 13:33:30.828539 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wfkwq_must-gather-chg92_ceccbe49-690a-417f-9270-ae954f09dc6d/gather/0.log" Oct 06 13:33:34 crc kubenswrapper[4698]: I1006 13:33:34.329681 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:33:34 crc kubenswrapper[4698]: E1006 13:33:34.330562 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.468788 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wfkwq/must-gather-chg92"] Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.469408 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wfkwq/must-gather-chg92" podUID="ceccbe49-690a-417f-9270-ae954f09dc6d" 
containerName="copy" containerID="cri-o://516a1151609b425dd5381485ee1a155da89e6478e0b9ccbfa2e7845e9afa4fd8" gracePeriod=2 Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.484886 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wfkwq/must-gather-chg92"] Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.785216 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wfkwq_must-gather-chg92_ceccbe49-690a-417f-9270-ae954f09dc6d/copy/0.log" Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.785833 4698 generic.go:334] "Generic (PLEG): container finished" podID="ceccbe49-690a-417f-9270-ae954f09dc6d" containerID="516a1151609b425dd5381485ee1a155da89e6478e0b9ccbfa2e7845e9afa4fd8" exitCode=143 Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.929094 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wfkwq_must-gather-chg92_ceccbe49-690a-417f-9270-ae954f09dc6d/copy/0.log" Oct 06 13:33:44 crc kubenswrapper[4698]: I1006 13:33:44.929666 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.023874 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bd8t\" (UniqueName: \"kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t\") pod \"ceccbe49-690a-417f-9270-ae954f09dc6d\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.023954 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output\") pod \"ceccbe49-690a-417f-9270-ae954f09dc6d\" (UID: \"ceccbe49-690a-417f-9270-ae954f09dc6d\") " Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.046581 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t" (OuterVolumeSpecName: "kube-api-access-8bd8t") pod "ceccbe49-690a-417f-9270-ae954f09dc6d" (UID: "ceccbe49-690a-417f-9270-ae954f09dc6d"). InnerVolumeSpecName "kube-api-access-8bd8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.126763 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bd8t\" (UniqueName: \"kubernetes.io/projected/ceccbe49-690a-417f-9270-ae954f09dc6d-kube-api-access-8bd8t\") on node \"crc\" DevicePath \"\"" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.213354 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ceccbe49-690a-417f-9270-ae954f09dc6d" (UID: "ceccbe49-690a-417f-9270-ae954f09dc6d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.229243 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ceccbe49-690a-417f-9270-ae954f09dc6d-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.338451 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceccbe49-690a-417f-9270-ae954f09dc6d" path="/var/lib/kubelet/pods/ceccbe49-690a-417f-9270-ae954f09dc6d/volumes" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.796471 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wfkwq_must-gather-chg92_ceccbe49-690a-417f-9270-ae954f09dc6d/copy/0.log" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.797560 4698 scope.go:117] "RemoveContainer" containerID="516a1151609b425dd5381485ee1a155da89e6478e0b9ccbfa2e7845e9afa4fd8" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.797608 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wfkwq/must-gather-chg92" Oct 06 13:33:45 crc kubenswrapper[4698]: I1006 13:33:45.817072 4698 scope.go:117] "RemoveContainer" containerID="2abef078383b079873e3b2ce34b108d1705c65053bd9b553684856d8dacef50e" Oct 06 13:33:47 crc kubenswrapper[4698]: I1006 13:33:47.328777 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:33:47 crc kubenswrapper[4698]: E1006 13:33:47.329485 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:33:59 crc kubenswrapper[4698]: I1006 13:33:59.329182 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:33:59 crc kubenswrapper[4698]: E1006 13:33:59.330202 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:34:10 crc kubenswrapper[4698]: I1006 13:34:10.329869 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:34:10 crc kubenswrapper[4698]: E1006 13:34:10.332790 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:34:22 crc kubenswrapper[4698]: I1006 13:34:22.328755 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:34:22 crc kubenswrapper[4698]: E1006 13:34:22.329526 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:34:34 crc kubenswrapper[4698]: I1006 13:34:34.329908 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:34:34 crc kubenswrapper[4698]: E1006 13:34:34.331085 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:34:47 crc kubenswrapper[4698]: I1006 13:34:47.329815 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:34:47 crc kubenswrapper[4698]: E1006 13:34:47.331136 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:35:01 crc kubenswrapper[4698]: I1006 13:35:01.329546 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:35:01 crc kubenswrapper[4698]: E1006 13:35:01.332376 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:35:16 crc kubenswrapper[4698]: I1006 13:35:16.329419 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:35:16 crc kubenswrapper[4698]: E1006 13:35:16.330669 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209" Oct 06 13:35:27 crc kubenswrapper[4698]: I1006 13:35:27.328544 4698 scope.go:117] "RemoveContainer" containerID="e5aeda9184920d9507e4986a6029d8b7d475a76926f7898082c3e64085d09b71" Oct 06 13:35:27 crc kubenswrapper[4698]: E1006 13:35:27.329479 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mj8x_openshift-machine-config-operator(490a89c4-aeb3-4c8f-bdfb-c36f7fc40209)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mj8x" podUID="490a89c4-aeb3-4c8f-bdfb-c36f7fc40209"